April 16, 2026

Most discussions about artificial intelligence (AI) in the corporate sphere paint it as a distant, neatly packaged revolution. However, the reality is far more chaotic and immediate. AI has infiltrated your company, not through formal introductions but via your employees' daily tools and decisions. It's controlling aspects of your business whether you realize it or not.
In a thought-provoking discussion with Heath Morgan, an in-house attorney and speculative fiction author, the blurred lines between fiction and corporate reality are strikingly clear. His book, "The Memory Project," imagines a world where digital personas interact with the living, a concept not too distant from current corporate practices where AI tools accumulate vast amounts of operational data and preferences.
Morgan emphasizes a crucial point: "The question is not whether your employees are using AI, but whether they are using it with intention." This unintentional integration of AI creates fertile ground for risk, but it can also drive cultural evolution within a company.
Drawing parallels to the past, Morgan refers to the social-media-savvy "latchkey kids" generation, who grew up with little oversight of their digital activities. Similarly, today's employees are adopting AI tools independently, drawn by convenience and efficiency and often overlooking the potential risks. These tools promise to save time but seldom address the long-term implications for organizational memory and data security.
As AI tools learn and adapt from every interaction, they are inadvertently creating a "corporate memory." Composed of all the data scraps and user interactions these tools accumulate, this memory becomes nearly impossible to erase, or to reclaim, once it is stored outside your immediate control. It can influence everything from IP strategy and confidentiality to compliance and corporate culture.
The essence of the challenge lies in the legacy these tools leave behind. Every unchecked adoption of AI technology is a step towards a future where the organization's data and operational ethos are dictated not by deliberate decisions but by ad hoc adaptations.
For in-house counsel, the wake-up call is clear. The governance of AI cannot start from a blank slate but must address the ingrained practices already present. This requires a proactive approach that involves mapping out existing AI use, understanding its implications, and integrating strategic governance that aligns with both current use and future risks.
The task ahead is not simply regulatory but fundamentally transformative. It involves reshaping how AI is perceived and managed, turning what could be a scattered set of tools into a coherent framework that supports the company's long-term objectives and ethical standards.
In conclusion, while AI's stealthy creep into corporate systems may seem benign, its potential to shape future operations, risk profiles, and even corporate legacy is immense. Recognizing and steering this unintentional adoption now is crucial to ensuring that the AI legacy you leave is not only intentional but also beneficial.