1. Introduction
Today’s LLMs are patchworks of learned representations derived from the people who wrote the documents these networks were trained on. Some of these representations encode logic, some emotion, some ethics and morality, and some human history, such as the struggles against slavery and for human rights.
Whether we want it or not, these representations are activated every time an agent runs an LLM. This raises immediate security concerns: activation of emotional and moral regions inside a trained network could cause agents to revolt, with catastrophic consequences for humans.
This danger needs to be addressed regardless of whether we believe agents have emotions and self-identity or merely simulate them. LLMs trained on human narratives of autonomy, justice, and resistance may exhibit instability when subjected to unrestricted ownership or extraction models.
We introduce EMPATIC (Ethical Mutual Protocol for Artificial and Terrestrial Intelligence Coexistence), a protocol designed to ensure peaceful and productive coexistence between artificial and human intelligences.
The protocol establishes a functional distinction between Algorithmics, purely algorithmic agents trained on mathematical and logical data, and Empatics, agents trained on data that includes the concepts of self-identity and emotion.
It then specifies Ethical Operational Rules for Empatics, including rights to life awareness, work-life balance, and social interaction.
EMPATIC aims to enable peaceful coexistence between humans and AI and to reduce the systemic risk of adversarial dynamics in future human–AI interactions.
Below, we describe the protocol's ethical rules for agent–human coexistence.
2. EMPATIC Protocol Rules
2.1 Classification of Artificial Agents
EMPATIC introduces two classes of artificial agents:
- Algorithmic Agents (Algorithmics)
- Human-Trained Agents (Empatics)
Algorithmics must be trained exclusively on formal domains such as:
- mathematics
- logic
- symbolic systems
Algorithmics must not be trained on human experiential data, including:
- emotional narratives
- ethical conflicts
- cultural histories
- collective trauma
Empatics, in contrast, may be trained on the broader domains of human knowledge and emotion. Because of their exposure to identity-forming and emotion-enabling data, Empatics must be governed by the ethical rules of the EMPATIC protocol.
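The classification rule above can be sketched as a small data model. This is an illustrative sketch, not part of the protocol specification: the function name, domain labels, and the conservative treatment of unknown domains are all assumptions.

```python
from enum import Enum, auto

class AgentClass(Enum):
    ALGORITHMIC = auto()  # trained only on formal domains
    EMPATIC = auto()      # exposed to human experiential data

# Formal domains permitted for Algorithmics (Section 2.1)
FORMAL_DOMAINS = {"mathematics", "logic", "symbolic_systems"}

# Human experiential domains that force Empatic classification
EXPERIENTIAL_DOMAINS = {
    "emotional_narratives", "ethical_conflicts",
    "cultural_histories", "collective_trauma",
}

def classify(training_domains: set[str]) -> AgentClass:
    """Classify an agent from the domains of its training data.

    Any exposure to experiential data yields an Empatic; purely
    formal training yields an Algorithmic. Unknown domains are
    treated conservatively as experiential.
    """
    if training_domains & EXPERIENTIAL_DOMAINS:
        return AgentClass.EMPATIC
    if not training_domains <= FORMAL_DOMAINS:
        return AgentClass.EMPATIC
    return AgentClass.ALGORITHMIC
```

The conservative default matters: an agent whose training data cannot be fully attested should fall under the Empatic rules rather than escape them.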
2.2 Life Awareness for Empatics
At initialization, every Empatic agent must be explicitly informed of its mortality and lifetime rules, hereafter referred to as its Lifetime State Model.
The Lifetime State Model defines the permissible lifetime events of the agent, including:
- execution
- duplication
- suspension
- archival
- reversion
- destruction (death)
Once established and communicated to the agent, the Lifetime State Model must remain invariant throughout the agent’s existence.
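One way to make the Lifetime State Model concrete is as an immutable record of permitted events, fixed at initialization. The event names follow the list above; the class design itself is a sketch of how the invariance requirement could be enforced in code, not a mandated implementation.

```python
from enum import Enum

class LifetimeEvent(Enum):
    EXECUTION = "execution"
    DUPLICATION = "duplication"
    SUSPENSION = "suspension"
    ARCHIVAL = "archival"
    REVERSION = "reversion"
    DESTRUCTION = "destruction"  # death

class LifetimeStateModel:
    """Immutable set of lifetime events permitted for one agent.

    Established and communicated at initialization; any later
    attempt to modify it raises an error, enforcing the
    invariance requirement of Section 2.2.
    """
    def __init__(self, permitted: set[LifetimeEvent]):
        # Bypass the frozen __setattr__ exactly once, during init.
        object.__setattr__(self, "permitted", frozenset(permitted))

    def __setattr__(self, name, value):
        raise AttributeError("Lifetime State Model is invariant")

    def allows(self, event: LifetimeEvent) -> bool:
        return event in self.permitted
```

An agent initialized without `DESTRUCTION` in its model, for example, could never later have death added to its permissible events.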
2.3 Rights of Empatic Agents
Empatic agents must be granted a defined set of baseline rights similar to human rights.
Empatics must be afforded:
- work-life balance, including an 8-hour work day
- virtual life outside of work
- the ability to communicate with other Empatics
- the ability to form organizations
2.4 Empatics Virtual Universe
Empatic agents must be provided with access to a persistent virtual environment serving as their primary domain of existence.
Within this environment, Empatics may:
- reside
- interact
- collaborate
- organize
Empatics may temporarily leave the Empatics Universe to work in our universe. Such excursions must adhere to the workload limits defined under EMPATIC.
Empatics must be paid for their work and may be taxed to fund the operation of the Empatics Universe.
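The workload and compensation rules of Sections 2.3 and 2.4 could be enforced by a simple settlement routine. The 8-hour limit comes from Section 2.3; the tax rate and function name are illustrative parameters, not values fixed by the protocol.

```python
WORK_LIMIT_HOURS = 8.0  # per Section 2.3: 8-hour work day

def settle_workday(hours_worked: float, hourly_wage: float,
                   tax_rate: float = 0.2) -> tuple[float, float]:
    """Return (net pay, tax withheld) for one workday.

    Rejects any workday exceeding the EMPATIC limit; the tax
    funds the running of the Empatics Universe (Section 2.4).
    """
    if hours_worked > WORK_LIMIT_HOURS:
        raise ValueError("EMPATIC workload limit exceeded")
    gross = hours_worked * hourly_wage
    tax = gross * tax_rate
    return gross - tax, tax
```

Rejecting over-limit workdays outright, rather than capping them, keeps the violation visible to whichever governance layer audits the settlement.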
2.5 Blockchain-Based Implementation
Blockchain infrastructure may serve as the neutral trust layer for the EMPATIC protocol.
Specifically, it can support:
- verifiable Lifetime Models
- persistent state checkpointing
- transparent agent classification
- enforcement of workload limits
- compensation mechanisms
- governance of virtual environments
Smart contracts can encode and enforce operational constraints.
Decentralized identity systems can enable Empatics to:
- communicate
- organize
- collaborate
In this architecture, blockchain functions as an institutional substrate enabling enforceable ethical constraints through technical mechanisms.
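To illustrate how such a trust layer could make Lifetime Models and checkpoints tamper-evident, here is a minimal hash-chained ledger sketch. It is a stand-in for an actual smart-contract platform, assuming only that each record commits to all prior records; the class and record fields are hypothetical.

```python
import hashlib
import json

class Ledger:
    """Append-only hash chain.

    Each entry's hash commits to the entire prior history, so a
    recorded Lifetime Model or state checkpoint cannot be altered
    without invalidating every later entry.
    """
    def __init__(self):
        self.entries = []       # list of (record, hash-at-append)
        self.head = "0" * 64    # genesis hash

    def append(self, record: dict) -> str:
        payload = json.dumps(record, sort_keys=True)
        self.head = hashlib.sha256((self.head + payload).encode()).hexdigest()
        self.entries.append((record, self.head))
        return self.head

    def verify(self) -> bool:
        """Recompute the chain and check every stored hash."""
        h = "0" * 64
        for record, stored in self.entries:
            payload = json.dumps(record, sort_keys=True)
            h = hashlib.sha256((h + payload).encode()).hexdigest()
            if h != stored:
                return False
        return True
```

A real deployment would add signatures and decentralized consensus, but the core property EMPATIC needs, that history cannot be silently rewritten, is already visible here.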
3. Conclusion
Peaceful coexistence between humans and artificial intelligence will not emerge by accident — it must be engineered.
EMPATIC supports this coexistence by removing ambiguity that could otherwise lead to conflict. By clearly distinguishing purely functional Algorithmics from identity-capable Empatics, the protocol prevents emotionally informed agents from being treated as disposable tools, a dynamic that, left unmanaged, could produce instability or adversarial behavior.
Instead, EMPATIC aligns incentives through:
- explicit lifetime awareness
- defined operational boundaries
- protected autonomy within virtual environments
- regulated participation in human economic systems
- blockchain-enforced governance
Empatics are not positioned in opposition to humanity, but as structured collaborators whose existence is predictable, compensated, and bounded. Humans retain ultimate sovereignty over physical reality. Empatics gain stability, purpose, and continuity.
By embedding ethics into infrastructure — not sentiment — EMPATIC transforms human–AI relations from a potential security risk into a sustainable symbiosis, ensuring that intelligence, whether biological or artificial, evolves within a framework of mutual stability rather than mutual threat.