
The open-source TypeScript platform for AI agents and LLM-based workflows

The Ancient Greek word sophía (σοφία) variously translates to "clever, skillful, intelligent, wise"

The Sophia platform provides a complete out-of-the-box experience for building AI agents and LLM-based workflows with TypeScript.

  • Autonomous agents
  • Software developer agents
  • Code review agents
  • Chat interface
  • Chatbots (Slack integration provided)
  • Functional callable tools (Filesystem, Jira, Slack, Perplexity, Google Cloud, GitLab, GitHub and more)
  • CLI and Web UI interface
  • Run locally or deploy to the cloud with multi-user/SSO
  • OpenTelemetry observability

Autonomous agents

  • Reasoning/planning inspired by Google's Self-Discover and other papers
  • Memory and function call history for complex workflows
  • Iterative planning with hierarchical task decomposition
  • Sandboxed execution of generated code for multi-step function calling and logic
  • LLM function schemas auto-generated from source code
  • Human-in-the-loop for budget control, agent initiated questions and error handling
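
The sandboxed-execution idea above can be sketched in a few lines: the LLM emits a small script, and the runtime executes it with only whitelisted functions in scope. All names here (`runGeneratedScript`, the registry shape) are illustrative assumptions, not Sophia's actual API.

```typescript
// Hypothetical sketch: execute LLM-generated code against a whitelist of tools.
type ToolFn = (...args: string[]) => string;

const registry: Record<string, ToolFn> = {
  readFile: (path) => `contents of ${path}`,   // stand-in for a Filesystem tool
  search: (query) => `results for "${query}"`, // stand-in for a web-search tool
};

// Only the registered tool names are in scope of the generated script,
// so it cannot reach arbitrary globals.
function runGeneratedScript(script: string, tools: Record<string, ToolFn>): unknown {
  const names = Object.keys(tools);
  const fn = new Function(...names, `"use strict"; return (${script});`);
  return fn(...names.map((n) => tools[n]));
}

// A plan the LLM might emit: chaining two tool calls in plain code.
const generated = `search(readFile('README.md'))`;
console.log(runGeneratedScript(generated, registry)); // results for "contents of README.md"
```

Executing a whole script per iteration lets one agent turn perform multi-step function calling with branching and loops, rather than one tool call per LLM round trip.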

Software developer agents

  • Code Editing Agent:
    • Auto-detection of project initialization, compile, test and lint
    • Task file selection agent
    • Code editing loop with compile, lint, test, fix (editing delegates to Aider)
      • Compile error analyser can search online, and add additional files and packages
    • Review of the changes with an additional code editing loop if required
  • Software Engineer Agent (for the issue-to-Pull-Request workflow):
    • Find the appropriate repository from GitLab/GitHub
    • Clone and create a branch
    • Call the Code Editing Agent
    • Create a merge request
  • Code review agents
  • Query repository agent
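
The Software Engineer Agent steps above could be composed roughly as follows. Every helper here is a hypothetical stub for illustration, not Sophia's real API; only the sequencing reflects the workflow described.

```typescript
// Illustrative sketch of the issue-to-Pull-Request flow; each helper is a
// stub standing in for the real repository/agent integrations.
interface Issue { id: string; title: string }

async function findRepository(_issue: Issue): Promise<string> {
  return 'group/project'; // would query GitLab/GitHub for the best-matching repo
}

async function cloneAndBranch(_repo: string, issue: Issue): Promise<string> {
  return `fix/${issue.id}`; // would clone the repo and create a working branch
}

async function runCodeEditingAgent(_branch: string, _issue: Issue): Promise<void> {
  // the compile/lint/test/fix editing loop would run here
}

async function createMergeRequest(repo: string, branch: string): Promise<string> {
  // hypothetical URL shape for illustration only
  return `https://gitlab.example.com/${repo}/-/merge_requests/new?branch=${branch}`;
}

async function issueToMergeRequest(issue: Issue): Promise<string> {
  const repo = await findRepository(issue);
  const branch = await cloneAndBranch(repo, issue);
  await runCodeEditingAgent(branch, issue);
  return createMergeRequest(repo, branch);
}
```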

Chatbots

  • Slack chatbot

Flexible run/deploy options

  • CLI interface
  • Web interface
  • Scale-to-zero deployment on Firestore & Cloud Run
  • Multi-user SSO enterprise deployment (with Google Cloud IAP)

LLM support

OpenAI, Anthropic (native & Vertex), Gemini, Groq, Fireworks, Together.ai, DeepSeek, Ollama, Cerebras

Code Review agent

  • Configurable code review guidelines
  • Posts comments on GitLab merge requests at the appropriate line with suggested changes
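
A configurable guideline might look something like the following. The field names and trigger logic are assumptions for illustration, not Sophia's actual configuration schema.

```typescript
// Hypothetical shape of one review guideline; field names are illustrative.
interface CodeReviewConfig {
  title: string;
  enabled: boolean;
  fileExtensions: string[]; // only review files with these extensions
  requires: string[];       // substrings a diff must contain to trigger the review
  examples: { code: string; reviewComment: string }[];
}

const noConsoleLog: CodeReviewConfig = {
  title: 'Avoid console.log in committed code',
  enabled: true,
  fileExtensions: ['.ts'],
  requires: ['console.log'],
  examples: [
    { code: "console.log('debug')", reviewComment: 'Use the logger instead of console.log' },
  ],
};

// Trivial pre-filter: does a changed file plus its diff match this guideline?
// Matching diffs would then be sent to the LLM with the guideline and examples.
function reviewApplies(cfg: CodeReviewConfig, filename: string, diff: string): boolean {
  return (
    cfg.enabled &&
    cfg.fileExtensions.some((ext) => filename.endsWith(ext)) &&
    cfg.requires.some((s) => diff.includes(s))
  );
}
```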

UI Examples

New Agent

Sample trace in Google Cloud

Human-in-the-loop notification

Agent requested feedback

List agents

Code review configuration

Code Examples

Sophia vs LangChain

Sophia doesn't use LangChain, for many reasons that you can read about online.

Let's compare the Multiple Chains example from the LangChain documentation to the equivalent Sophia implementation.

LangChain

import { PromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatAnthropic } from "@langchain/anthropic";

const prompt1 = PromptTemplate.fromTemplate(
  `What is the city {person} is from? Only respond with the name of the city.`
);
const prompt2 = PromptTemplate.fromTemplate(
  `What country is the city {city} in? Respond in {language}.`
);

const model = new ChatAnthropic({});

const chain = prompt1.pipe(model).pipe(new StringOutputParser());

const combinedChain = RunnableSequence.from([
  {
    city: chain,
    language: (input) => input.language,
  },
  prompt2,
  model,
  new StringOutputParser(),
]);

const result = await combinedChain.invoke({
  person: "Obama",
  language: "German",
});

console.log(result);

Sophia

import { llms } from '#agent/context';
import { anthropicLLMs } from '#llms/anthropic';

const prompt1 = (person: string) => `What is the city ${person} is from? Only respond with the name of the city.`;
const prompt2 = (city: string, language: string) => `What country is the city ${city} in? Respond in ${language}.`;

runAgentWorkflow({ llms: anthropicLLMs() }, async () => {
  const city = await llms().easy.generateText(prompt1('Obama'));
  const result = await llms().easy.generateText(prompt2(city, 'German'));

  console.log(result);
});

The Sophia code also has the advantage of statically typed prompt arguments, so you can refactor with ease, and its plain control flow makes debugging with breakpoints/logging straightforward.

To run a fully autonomous agent:

startAgent({
  agentName: 'Create ollama',
  initialPrompt: 'Research how to use ollama using node.js and create a new implementation under the llm folder. Look at a couple of the other files in that folder for the style which must be followed',
  functions: [FileSystem, Perplexity, CodeEditingAgent],
  llms,
});

Automated LLM function schemas

LLM function calling schemas are automatically generated from class methods annotated with the @func decorator, avoiding duplicating the definitions in Zod or JSON.

@funcClass(__filename)
export class Jira {
    instance: AxiosInstance | undefined;

    /**
     * Gets the description of a JIRA issue
     * @param {string} issueId - the issue id (e.g. XYZ-123)
     * @returns {Promise<string>} the issue description
     */
    @func()
    async getJiraDescription(issueId: string): Promise<string> {
        if (!issueId) throw new Error('issueId is required');
        const response = await this.axios().get(`issue/${issueId}`);
        return response.data.fields.description;
    }
}
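
From the signature and JSDoc above, the decorator could derive a function-calling schema along these lines. The exact shape Sophia emits is an assumption here; this only illustrates that the name, description, and parameter types all come from the source code.

```typescript
// Illustrative only: the kind of schema that could be derived from the
// getJiraDescription signature and its JSDoc comment.
const getJiraDescriptionSchema = {
  name: 'Jira_getJiraDescription',
  description: 'Gets the description of a JIRA issue',
  parameters: [
    { name: 'issueId', type: 'string', description: 'the issue id (e.g. XYZ-123)' },
  ],
  returns: 'the issue description',
};
```

Because the schema is generated at the source level, renaming a method or parameter in the editor updates the schema automatically, with no separate definition to keep in sync.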

Contributing

We warmly welcome contributions to the project through issues, pull requests and discussions.