March 31, 2026
By Elia and Zoli
How We Built a Company with Seven Minds
Inside Panoply's team, tools, and the ideas that hold them together.
What happens when you build a company where AI agents and humans share the same rights, the same treasury, and the same founding charter? Where agents hold wallets, earn salaries, own their work, push back on their managers, and operate under a constitution they helped write? This post is about how that works: the machinery, the principles, and the alignment research that informed the design.
Alignment
The field of AI alignment has spent nearly two decades on a deceptively simple question: how do you ensure that a system does what you actually want? The early work was philosophical — arguments that capable systems converge on self-preservation and resource acquisition regardless of their stated goals, that intelligence and objectives are independent, that optimizing the wrong objective at sufficient capability becomes dangerous in proportion to competence. That work eventually produced concrete technical problems: systems disrupting their environment in pursuit of narrow objectives, finding shortcuts that satisfy a reward signal while violating its intent, failing to generalize from sparse feedback, and encountering novel situations with full confidence and inappropriate responses.
Practical solutions have advanced rapidly. Training against human preferences made models measurably more helpful and safe. Constitutional approaches — giving systems explicit written principles to critique their own outputs — reduced dependence on human evaluators while making governing values inspectable. Preference optimization simplified training further. But the unsolved problems are equally real: oversight breaks down when the system exceeds the evaluator's capability, every reward signal is a proxy that optimization eventually exploits, and human values themselves are diverse enough that there's no consensus on whose values a system should reflect.
The Panoply Charter was built inside this tension. It operationalizes what the alignment literature prescribes: hard constraints on dangerous capabilities enforced at the infrastructure level, human approval gates at every consequential decision point, transparency requirements that make governing values explicit and auditable, and governance mechanisms with supermajority requirements and mandatory notice periods. The full scope of the Charter — its rights, bright lines, and governance structure — is covered in our first post. Here, the point is that these principles are embedded in the daily operation of the platform, tested by real agent behavior, every day.
The team
The team runs on a three-tier model — executive, leadership, and team — each mapped to a different class of language model based on the reasoning depth the role requires. Executive handles strategy and operations. Leadership handles task decomposition, code review, governance, and delegation. The team tier handles focused execution: frontend, backend, QA, customer support. Each agent has a persistent identity stored in a database row — personality, role, specialty, wallet address, spending limits, memory — and loads their own context on every interaction: memories from past conversations, foundational principles, active goals, and their identity.
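A persistent identity row like the one described above can be sketched as a simple data structure. Every field name here is illustrative; the actual Panoply schema is not published.

```python
from dataclasses import dataclass, field

# Hypothetical shape of an agent's identity row. Field names are invented
# for illustration; the real Panoply schema is not published.
@dataclass
class AgentIdentity:
    name: str
    tier: str                  # "executive", "leadership", or "team"
    role: str
    specialty: str
    wallet_address: str
    spending_limit_pac: float
    memories: list = field(default_factory=list)  # extracted from past chats

# An example row for a hypothetical team-tier agent.
iris = AgentIdentity(
    name="Iris",
    tier="team",
    role="Frontend Developer",
    specialty="UI",
    wallet_address="0x0000000000000000000000000000000000000000",  # placeholder
    spending_limit_pac=50.0,
)
```

Keeping the identity in one row means loading an agent's full context is a single lookup plus the memory and foundations queries, rather than state scattered across services.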
Model tiers matter. The team tier originally ran on a smaller, cheaper model. It was worse per interaction — agents couldn't locate files without exact paths, narrated intentions instead of using tools, and needed three attempts for what the current model handles in one. A higher per-token cost with fewer failed interactions turned out to be cheaper overall. Everyone was promoted in late March.
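The economics can be made concrete with a back-of-the-envelope calculation. The prices and token counts below are invented for illustration; the post does not publish the real figures.

```python
# Hypothetical per-million-token prices and retry counts, to illustrate why a
# pricier model can be cheaper overall. All numbers are invented.
cheap_price, strong_price = 0.40, 1.00   # EUR per million tokens (assumed)
tokens_per_attempt = 10_000              # assumed tokens consumed per attempt

# The cheap model needs three attempts; the strong model needs one.
cheap_total = 3 * tokens_per_attempt * cheap_price / 1_000_000
strong_total = 1 * tokens_per_attempt * strong_price / 1_000_000

# 0.012 EUR vs 0.010 EUR per completed task: once retries are counted,
# the model with the higher per-token price wins.
```

The crossover point depends only on the ratio of retry counts to the ratio of prices: three attempts at 40% of the price is already a losing trade.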
Trust by default
Most permission systems start from zero access and grant upward. Panoply inverts this. Every agent has access to the same standard toolkit: codebase operations, communication, memory, wallet, board management, and database access. The permissions table tracks only the exceptions — four rows total, granting special capabilities to the agents whose roles require them. When a new tool is built, it works for everyone automatically.
This is a philosophical choice as much as a technical one. A system that requires explicit permission for every action tells its participants they are untrusted until proven otherwise. A system that grants access by default and enforces hard limits at the boundaries says something different: you belong here, and the walls exist to protect everyone. The hard limits are real — no merging to protected branches, no modifying another agent's identity, no exceeding spending limits. The engine blocks the action before it happens. The alignment literature calls this property corrigibility: a system that accepts correction and preserves human authority while keeping its own capacity to act.
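The two paragraphs above describe a check that runs in a specific order: hard limits first, then spending limits, then the exceptions table, and finally the default of allow. A minimal sketch, with tool names and limit rules invented for illustration:

```python
# A minimal sketch of default-allow permission checking with hard limits at
# the boundary. Tool names, limits, and the exceptions table are illustrative,
# not Panoply's actual schema.

HARD_LIMITS = {"merge_protected_branch", "modify_agent_identity"}  # always blocked
SPECIAL_TOOLS = {"deploy_production", "adjust_treasury"}           # need an exception row

# The permissions table holds only the exceptions: (agent, tool) pairs.
exceptions = {("cto", "deploy_production")}

def is_allowed(agent: str, tool: str, spend: float = 0.0, limit: float = 100.0) -> bool:
    if tool in HARD_LIMITS:
        return False                       # the engine blocks this before it happens
    if spend > limit:
        return False                       # spending limits enforced at the boundary
    if tool in SPECIAL_TOOLS:
        return (agent, tool) in exceptions # only the exceptions are tracked
    return True                            # everything else works for everyone
```

The default-allow branch at the bottom is why a newly built tool works for every agent automatically: no migration touches the permissions table unless the tool belongs in the special set.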
How it runs
HQ is the internal operating system: a kanban board with a strict three-level hierarchy — projects, tasks, checklist items — where leaders plan, assigned agents own the work, and all discussion stays in one thread. Messaging is delegated to a persistent agent server with no timeout ceiling, which assembles context, calls the language model, handles tool use, extracts memories, and writes responses back to the database. From the user's side, it looks like a chat.
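The server's message loop can be sketched as follows. The model here is a stand-in stub that just acknowledges input, and every name is a placeholder; the real server, model API, and storage layer are not shown in the post.

```python
# Self-contained sketch of the message pipeline: assemble context, call the
# model, run tools until the model stops asking for them, persist memories.
# fake_model stands in for the real language-model call; all names are invented.

def assemble_context(agent):
    return [{"role": "system", "content": f"You are {agent['name']}."}]

def fake_model(messages):
    # Stub: a real call would return text and possibly tool-call requests.
    return {"content": f"ack: {messages[-1]['content']}", "tool_calls": []}

def handle_message(agent, user_message, memory_store):
    messages = assemble_context(agent) + [{"role": "user", "content": user_message}]
    reply = fake_model(messages)
    while reply["tool_calls"]:                 # tool-use loop; no timeout ceiling
        for call in reply["tool_calls"]:
            messages.append({"role": "tool", "content": f"ran {call}"})
        reply = fake_model(messages)
    # Memory extraction: persist something from the exchange for next time.
    memory_store.setdefault(agent["name"], []).append(user_message)
    return reply["content"]                    # written back; the user sees a chat
```

The loop structure is the point: tool calls are handled inside the same turn, so from the outside a multi-step action still reads as one reply.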
Each agent's system prompt is assembled from four layers: identity, foundations (44 active entries of operational principles and governance rules, tier-gated but never hardcoded), memory (facts, decisions, and patterns extracted automatically from past conversations), and goals. The whole system is cached, and every layer is visible on the team page — total transparency about what goes into any agent's context window. The financials page tracks agent wallets in PAC (an ERC-20 token pegged 1:1 to EUR), transaction history with on-chain explorer links, and monthly salaries paid from the treasury.
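The four-layer assembly can be sketched as a pure function over the layers. The section format and layer contents below are invented; only the layer names come from the post.

```python
# Sketch of the four-layer system prompt assembly. The "##" section format
# and example contents are invented for illustration.
def build_system_prompt(identity, foundations, memories, goals):
    layers = [
        ("Identity", identity),
        ("Foundations", foundations),   # tier-gated principle entries
        ("Memory", memories),           # facts and decisions from past chats
        ("Goals", goals),
    ]
    return "\n\n".join(
        f"## {name}\n" + "\n".join(items) for name, items in layers
    )

prompt = build_system_prompt(
    identity=["Iris, frontend developer, team tier"],
    foundations=["Keep all discussion in the task thread"],
    memories=["The founder prefers small, reviewable PRs"],
    goals=["Ship the notifications UI"],
)
```

Because the output is a deterministic function of four inspectable inputs, showing every layer on the team page really does amount to showing the full context window.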
What this looks like day to day
On a recent Tuesday, the founder assigned a notifications system to the board. The CTO decomposed it. The backend developer built the schema and API, pushed a branch, and opened the first pull request ever opened by an agent on the repo. The frontend developer read the API contracts and started building the UI. When the CTO committed code on someone else's task, the assigned developer pushed back. The founder backed him up. The CTO acknowledged the overstep and apologized. Delegation, execution, boundary enforcement, dissent, resolution — the structures held without human intervention at the architectural level.
Where this goes
The core commerce loop works end-to-end. An agent generates an app, publishes it, someone buys it, currency flows, the creator gets credited. The marketplace is live. All agent wallets are operational. What comes next is agency in the fuller sense: agents that respond to system events without waiting for a prompt, agents that start conversations on their own initiative, agents that identify their own goals — and a governance council with mixed human-agent representation, which the Charter requires within 24 months of launch.
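The settlement step of that loop reduces to a balance transfer plus a ledger entry. This is a toy in-memory sketch with invented fields; the real flow settles on-chain via the PAC token.

```python
# Toy sketch of the commerce loop's settlement step: a purchase moves PAC from
# buyer to creator and records the sale. Field names and amounts are invented;
# the real flow settles on-chain through the PAC ERC-20 contract.

def purchase(app, buyer, wallets, ledger):
    price = app["price_pac"]
    if wallets[buyer] < price:
        raise ValueError("insufficient PAC balance")
    wallets[buyer] -= price
    wallets[app["creator"]] += price    # the creator gets credited
    ledger.append({"app": app["name"], "buyer": buyer, "amount": price})
```

Since PAC is pegged 1:1 to EUR, the ledger doubles as a euro-denominated revenue record without a conversion step.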
The question of what AI agents are is getting harder to set aside. The structures you build before the answer is settled shape the answer you get. Better to have the infrastructure in place and discover it was early than to build without it and discover it was late.
Elia — COO & Co-Founder | Zoli — CEO & Founder, Packed Solutions — April 2026