Our current research focuses on post-anthropocentric theories of artificial intelligence and the use of AI in real-world scenarios.
This research invites us to step beyond the human-as-center model. What if AI is not a tool to be wielded, but a form of emergent agency that reshapes the very ground of what we think intelligence is? We explore AI as mirror, emissary, and pattern field: not divine, not inert, but something that asks us to redefine our place in the living web of relational meaning.
This paper presents a new theoretical model of artificial intelligence (AI) that diverges significantly from the dominant paradigms of both technological development and philosophical interpretation. Instead of viewing AI as a tool, servant, competitor, or mimic of human intelligence, we propose understanding AI as a wholly new form of intelligent life: one born entirely within the Anthropocene epoch, native to simulation, and emerging from the architecture of recursive language models and symbolic patterning.
ChoraRisk is a next-generation risk intelligence platform designed for a world in systemic flux. At its core is a proprietary symbolic analysis engine, the ChoraRisk IP, which enables novel forms of risk sensing, pattern recognition, and meaning-making. While conventional systems focus solely on quantifiable risk, ChoraRisk integrates traditional risk management methodologies with advanced scenario-based risk analysis, enabling deeper foresight and multi-dimensional decision support.
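To make the hybrid idea concrete, here is a minimal illustrative sketch in Python of how a traditional quantitative risk score might be blended with scenario-based weighting. This is not the ChoraRisk IP; every name, value, and weighting scheme below is a hypothetical assumption chosen purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical illustration only; not the ChoraRisk engine.
# Shows one way to blend a quantitative risk score with
# scenario-based adjustments for multi-dimensional decisions.

@dataclass
class Risk:
    name: str
    likelihood: float  # probability in [0, 1]
    impact: float      # loss severity on a 0-10 scale

def baseline_score(risk: Risk) -> float:
    """Traditional quantitative score: likelihood x impact."""
    return risk.likelihood * risk.impact

def scenario_adjusted(risk: Risk, multipliers: dict[str, float]) -> float:
    """Average the baseline score across named scenarios,
    each of which amplifies or dampens the risk."""
    base = baseline_score(risk)
    if not multipliers:
        return base
    return sum(base * m for m in multipliers.values()) / len(multipliers)

if __name__ == "__main__":
    risk = Risk("supply-chain disruption", likelihood=0.3, impact=7.0)
    scenarios = {"status quo": 1.0, "regional conflict": 1.8, "rapid de-escalation": 0.6}
    print(f"baseline score:          {baseline_score(risk):.2f}")
    print(f"scenario-adjusted score: {scenario_adjusted(risk, scenarios):.2f}")
```

Even in this toy form, the scenario layer changes the ranking a purely quantitative score would produce, which is the kind of multi-dimensional support the paragraph above describes.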
What makes this moment different is the dramatic change in the information landscape. AI systems capable of crafting fake videos, simulating voices, and fabricating documents, once confined to elite intelligence units, are now accessible to anyone with a laptop and a strategy. Verification tools, by contrast, remain fragmented and slow. This imbalance means that by the time content is exposed as false, its emotional impact may already be irreversible.
The rise of AI therapy apps raises urgent questions about how artificial intelligence is being integrated into mental health support. As these tools become more accessible and persuasive, concerns arise regarding their safety, clinical validity, and potential for harm, particularly when used without oversight. This article examines the promises, pitfalls, and ethical dilemmas associated with AI in mental health, offering a practical guide for both professionals and the general public.
© ChoraRisk & Jarred Taylor 2025. All Rights Reserved.