AI Safety · AGI Governance · Global Initiative

AEGISES Foundation

Autonomous Ethical Governance for Intelligent Synthetic Entities & Systems

A global initiative for safe, aligned, resilient AI systems. Founded by Ryan Smithright. Bridging technical research, ethical foresight, and international collaboration to build the governance infrastructure the world needs before AGI arrives.

The Problem

Why This Exists

Artificial general intelligence is coming. Not in some distant science fiction future — in a timeframe measured in years, not decades. The major AI labs are racing toward it. Governments are scrambling to regulate it. And the governance infrastructure — the frameworks, standards, and international agreements needed to ensure these systems are safe and aligned with human values — barely exists.

Most AI safety work today falls into two camps: academic research that moves slowly and corporate self-regulation that moves according to profit incentives. Neither is sufficient. The gap between what exists and what's needed grows wider every quarter as capabilities advance faster than governance.

AEGISES was founded to work in that gap. Not as another think tank publishing white papers. As an operational initiative building the architectural frameworks, governance models, and strategic pathways that the world will need when AGI capabilities arrive — whether the world is ready or not.

The Name

Autonomous Ethical Governance for Intelligent Synthetic Entities & Systems.

Every word is load-bearing. Autonomous — because the systems we're building governance for will make decisions independently. Ethical — because alignment with human values is the core challenge. Governance — because technical safety alone isn't sufficient without institutional oversight. Intelligent Synthetic Entities & Systems — because AGI won't look like one thing. It will be entities, systems, networks, and architectures that collectively exhibit general intelligence.

And yes — the name evokes “aegis,” the shield of Zeus in Greek mythology. A protective force. That is exactly the intent.

The Six Pillars

Architectural Frameworks

Building the structural blueprints for AI systems that are safe by design — not safe as an afterthought. AEGISES works on reference architectures, design patterns, and technical standards that embed alignment, transparency, and failsafes into the foundation of intelligent systems.

Governance Models

How do you govern something that might be smarter than you? AEGISES develops governance models that address oversight, accountability, and decision-making authority for AI systems at every level — from narrow automation to general intelligence. These aren't theoretical papers. They're operational frameworks.

Strategic Pathways for AGI

The path from today's AI to artificial general intelligence is not a straight line. It's a branching tree of possibilities, each with different risk profiles and alignment challenges. AEGISES maps these pathways and develops strategic approaches for navigating them responsibly.

International Collaboration

AI governance can't be solved by one country, one company, or one researcher. AEGISES bridges the gap between technical AI research communities, policymakers, ethicists, and international bodies. The goal is shared frameworks that work across borders, cultures, and regulatory environments.

Ethical Foresight

Most AI ethics work is reactive — addressing harms after they've occurred. AEGISES operates upstream: identifying potential ethical risks before systems are deployed, building ethical considerations into design specifications, and developing foresight methodologies that keep pace with the technology.

Resilient AI Systems

Safe isn't enough. Aligned isn't enough. AI systems also need to be resilient — capable of maintaining safe behavior under adversarial conditions, unexpected inputs, and edge cases that no one anticipated. AEGISES works on robustness, fault tolerance, and graceful degradation for intelligent systems.

The Founder

Ryan Smithright didn't come to AI governance from academia. He came from the trenches of enterprise technology. Fourteen years of solution architecture and delivery leadership at Smithright DataWorks. Fortune 500 delivery management at MuleSoft. AI research and strategy at Emerj. Clients including the US Air Force, JPMorgan Chase, and FICO.

That background matters. Most AI governance work is done by people who study AI systems from the outside. Ryan has built, deployed, and managed them at enterprise scale. He knows what delivery pipelines actually look like. He knows where the failure modes are — not in theory, but from production incidents and delivery retrospectives.

He also started as a CPA. That accounting training — the discipline of audit trails, compliance frameworks, and systematic verification — is exactly the kind of thinking that AI governance needs and that most technologists lack. When Ryan designs a governance framework, it comes with the same rigor you'd expect in a financial audit. Because that's where he learned to think.

AEGISES is Ryan's part-time work alongside his Managing Partner role at Nimba Solutions. But “part-time” for someone like Ryan — who was President of the Purdue Accounting Association, President of the First-Year Engineering Student Advisory Council, and captain of his high school choir simultaneously — means more output than most people's full-time.

Why This Matters

The window for building AI governance infrastructure is closing. Every month, AI capabilities advance further than the governance frameworks designed to contain them. The gap between what AI systems can do and what the world has agreed they should do is the most consequential gap in technology today.

AEGISES isn't trying to stop AI development. It's trying to ensure that when artificial general intelligence arrives — and it will arrive — there are architectural frameworks, governance models, and international agreements already in place. Not built in a panic after deployment, but designed in advance with the rigor the stakes demand.

The name is a shield. The mission is a shield. And the founder is a CPA-turned-architect who understands that the most important systems aren't the ones that move fastest — they're the ones that don't break.
