
Ethical AI: Building Trust Through Transparency

Soralynx Ethics Board
December 22, 2022

The Need for Openness

AI is too important to be kept behind the closed doors of proprietary models. Open source drives both innovation and safety. By sharing our models, like the Llama series, we allow the global community to stress-test them, improve them, and build applications we never dreamed of.

The Myth of "Existential Risk"

There is a great deal of fear-mongering about AI "taking over." I believe this fear is misplaced.

  • Intelligence != Dominance: Being smart does not imply a drive to dominate.
  • Hard Control: We build these machines, they run on our hardware, and we set their objectives.

Interpreting the Black Box

Large Language Models (LLMs) are impressive, but they can be opaque and prone to hallucination because they are auto-regressive: they generate text one token at a time by predicting the next word. They don't "know" facts; they model the probability of the next token given everything that came before. To build truly ethical and reliable systems, we need architectures that ground them in reality.
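To make the "they just know probability" point concrete, here is a toy sketch of autoregressive generation: scores for each vocabulary word are turned into a probability distribution, and the next token is sampled from it. This is a minimal illustration, not any real model's code; the vocabulary, logits, and function names (`softmax`, `sample_next_token`) are all hypothetical.

```python
import numpy as np

def softmax(logits):
    # Convert raw scores into a probability distribution over the vocabulary.
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def sample_next_token(logits, rng):
    # An autoregressive LM never "knows" the next word; it samples
    # from a probability distribution conditioned on the context so far.
    probs = softmax(logits)
    return rng.choice(len(probs), p=probs)

# Toy vocabulary and scores: the model strongly favors "cat" after "the".
vocab = ["the", "cat", "sat", "mat"]
logits = np.array([0.1, 3.0, 0.2, 0.1])

rng = np.random.default_rng(0)
token = sample_next_token(logits, rng)
print(vocab[token])
```

Even a high-probability token is only likely, not guaranteed, which is one intuition for why a purely probabilistic generator can confidently emit false statements.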

  • JEPA (Joint Embedding Predictive Architecture): I propose this new architecture. It gives the system a "World Model": an internal simulation of cause and effect.
  • It learns the way a baby does: by observing the world, not just by reading millions of books.
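The key idea behind a joint embedding predictive setup can be sketched in a few lines: encode a context and a target into abstract representations, then predict the target's *representation* rather than the raw target itself, so the model can ignore unpredictable low-level detail. This is a simplified numpy illustration under my own assumptions (random weights, `tanh` encoders, a linear predictor), not the actual JEPA architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def encoder(x, W):
    # Map a raw observation into an abstract embedding.
    return np.tanh(W @ x)

def predictor(s_x, P):
    # Predict the embedding of the target from the embedding of the context.
    return P @ s_x

# Hypothetical dimensions: 8-dim observations, 4-dim embeddings.
W_ctx = rng.normal(size=(4, 8))   # context encoder weights
W_tgt = rng.normal(size=(4, 8))   # target encoder weights
P = rng.normal(size=(4, 4))       # predictor weights

x = rng.normal(size=8)            # context, e.g. the current video frame
y = rng.normal(size=8)            # target, e.g. the next video frame

s_x = encoder(x, W_ctx)
s_y = encoder(y, W_tgt)
s_y_hat = predictor(s_x, P)

# The loss is measured in embedding space, not pixel space, so the
# model is free to discard detail it cannot possibly predict.
loss = np.mean((s_y_hat - s_y) ** 2)
```

Contrast this with an autoregressive LLM, which is penalized for every wrong token: predicting in representation space is what lets a world model focus on cause and effect rather than surface detail.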

Bias and Fairness

Bias is not just a data problem; it's a transparency problem. If only a few companies control the models, the biases of those companies become the biases of the world.

"Open research is the only way to ensure AI systems are robust, fair, and safe for everyone."

We must democratize access to these powerful tools to ensure that a diverse set of values is represented globally, not just the values of Silicon Valley.
