
The Social Architecture of Data: How Computing Systems Mirror Their Age

To the Students Who Will Build Tomorrow: Your Moment Has Arrived


A Letter to the Class of 2025
You are living through the most consequential inflection point in computing history since the invention of the digital computer itself. I don't say this lightly, and I don't say it as hype.
I say it because I've watched the arc of this technology for decades, and I can see what's happening now—what you're about to inherit—and it takes my breath away.
Let me be direct: The next three to five years will determine whether you become architects of the future or passengers in it.

The Invisible Revolution

Here's what makes this moment so dangerous and so exciting: the revolution is invisible to most people. They see ChatGPT write an essay and think "neat party trick." They see DALL-E generate images and think "cool toy." They're missing what's actually happening.
What's actually happening is that the distance between human intention and machine execution—the gap that has defined computing for 75 years—is collapsing to zero.
Your grandparents' generation needed specialist expertise to program ENIAC. Your parents' generation needed computer science degrees to build software. Your older siblings learned to code in Python and JavaScript.
You are the first generation that can program reality in natural language.
But here's the trap: because it looks easy—because you can type a prompt and get an answer—it's tempting to think there's nothing to learn. That the hard work is over. That AI will do everything for you.
This is catastrophically wrong.

Why the Next Few Years Matter More Than Ever: The Dawn of the Cognition as a Service Age

The students who work hard over the next few years—who dig deep into how these systems actually work, who understand the paradigm shifts we're about to explore, who build real things with these new tools—those students will be the architects of the Cognition as a Service age.
Everyone else will be using tools built by others, constrained by decisions made by others, living in systems designed by others.
Here's why this moment is different from every previous technology wave:
1. The Window Is Open, But Won't Stay Open
Right now, AI capabilities are advancing faster than anyone can build businesses or regulations around them. There's a brief window—maybe 3-5 years—where individuals and small teams can compete with large corporations.
Where understanding the paradigm matters more than having resources.
This happened with the early internet (1993-1998).
It happened with mobile apps (2008-2013).
It happened with cloud computing (2010-2015).
Each time, there was a window where scrappy, knowledgeable teams could build things that changed the world.
That window always closes. First movers establish platforms. Regulations crystallize. Barriers to entry rise. The game becomes about scale and capital rather than insight and speed.
You are at the beginning of that window right now. What you learn and build in the next few years will determine whether you're creating the platforms or using them.
2. Natural Language Is Deceiving
Because you can write a prompt in English and get sophisticated results, it's easy to believe you don't need technical knowledge anymore. This is like saying "because I can drive a car, I don't need to understand transportation systems."
The students who will win aren't just prompt writers. They're people who understand:
How AI models actually work (not the math necessarily, but the principles)
What problems AI is good at versus where it fails
How to architect systems that combine AI with traditional computing
How data structures, algorithms, and system design have evolved
How business processes can be modeled, stored, and dynamically modified
How to evaluate AI output critically and iterate effectively
The people writing good prompts won't change the world.
The people designing systems where AI is one component of a larger architecture will.
3. The Skills Are Stackable, Not Replaceable
Every technology wave builds on what came before. You can't understand why JSON matters without understanding SQL. You can't appreciate AI's power without understanding what programming used to require. You can't design good AI-powered systems without understanding system architecture.
The narrative we're about to walk through—from ENIAC to VAX/VMS to SQL to JSON to AI—isn't history for history's sake. It's the conceptual foundation you need to build the future.
When you understand why Codd designed the relational model the way he did, you understand the constraints of relational databases. When you understand those constraints, you understand why JSON emerged. When you understand JSON's flexibility, you understand why document databases enabled the platform economy. When you understand the platform economy's architecture, you understand what AI is about to disrupt.
Each layer of understanding makes you more powerful.
4. We're Not Replacing Programming—We're Democratizing It
There's a narrative that "AI will replace programmers." This is nonsense. AI is replacing certain kinds of programming the same way compilers replaced writing assembly code by hand.
What's actually happening is that the population of people who can build software is about to expand by 1000x.
The grocer in Stouffville who can now orchestrate international supply chains with a prompt—she's programming. The teacher who can create personalized learning systems—she's programming. The community organizer who can build coordination platforms—they're programming.
But here's the thing: the people who understand the full stack—from data structures to system architecture to business processes to AI capabilities—those people will build systems that everyone else uses.
You can be a user of tools others built, or you can be a builder of tools others use. The work you put in over the next few years determines which category you fall into.

What "Working Hard" Actually Means

I'm not talking about grinding through endless problem sets or memorizing syntax.
That's the old model of education, and it's obsolete.
Working hard in the Cognition as a Service age means:
1. Building Real Things: Don't just learn concepts—build actual systems. Start small. Build a web app that uses AI. Create a tool that solves a real problem for real people. Break things. Fix them. Iterate.
2. Understanding Paradigm Shifts: Read the paper we're about to walk through not as history, but as a map of how to think about technological change. Each shift—from hierarchical databases to relational to document to AI—represents a fundamental change in how we model reality in software.
3. Thinking in Systems: AI isn't magic. It's one component of systems. Learn to think about: How does data flow? Where does intelligence live? What happens at scale? Where are the failure points? How do systems evolve over time?
4. Staying Curious About the Messy Details: When you use a tool, ask: How does this actually work? What are its limitations? What assumptions is it making? Where will it break?
5. Connecting Across Domains: The most powerful innovations come from connecting ideas across fields. Business process modeling + AI. Supply chain logistics + decentralized protocols. Education + personalization + cognitive systems. The intersections are where the opportunities are.

The Paper You're About to Read

What follows is not a textbook. It's not a tutorial.
It's a narrative journey through 75 years of computing history, showing you how each generation built on what came before, and how all of it is converging into the moment you're living through right now.
We're going to start with room-sized computers that took hours to program, and end with a world where a small business owner can orchestrate global logistics with a single sentence.
We're going to watch data structures evolve from rigid hierarchies to flexible documents to self-modifying processes.
We're going to see how the economics of storage transformed what was possible, and how AI is now transforming what's thinkable.
And at the end, you're going to understand why you are standing at the most significant inflection point in the history of human-computer interaction.

Your Choice

Here's the uncomfortable truth: you can coast. You can use AI tools to do your homework, get your degree, get a job where you use software others built, and have a perfectly fine career.
Or you can do the hard work. You can dig into how these systems actually work.
You can build things. You can break things. You can learn to think in systems and paradigms and architectures.
You can position yourself to be one of the people who builds the Cognition as a Service age rather than just lives in it.
The first path is easier in the short term. The second path requires dedication, curiosity, and intellectual courage over the next few years.
But the second path is where you'll find the students who:
Build the startups that compete with today's giants
Design the systems that run tomorrow's industries
Create the tools that empower billions of people
Solve problems we haven't even articulated yet
Make obscene amounts of money doing work they love

The Formative Generation

You are the formative generation. Not because you're special (though you might be), but because you're here at the right time with the right tools and the right opportunities.
Every technological revolution has a formative generation—the people who show up when the paradigm is shifting and do the hard work to understand it deeply while it's still malleable.
The formative generation of the internet (the mid-1990s through the early 2000s) built Google, Amazon, Facebook. They became billionaires, yes, but more importantly, they shaped how billions of people interact with information and each other.
The formative generation of mobile (late 2000s) built Instagram, Uber, WhatsApp. They created industries that didn't exist and changed how we move through the world.
The formative generation of AI and Cognition as a Service is forming right now. In classrooms like this one. With students like you.
The question is: are you going to do the work to be part of it?

What This Paper Will Give You

By the time you finish reading what follows, you'll understand:
Why computing architecture has evolved the way it has
What constraints shaped each generation of technology
How economic forces (like the cost of storage) drive technical change
Why AI isn't just "better software" but a fundamental paradigm shift
How systems are moving from rigid code to flexible processes
Why natural language is becoming the universal programming interface
What opportunities exist for people who understand this deeply
More importantly, you'll have a mental model—a way of thinking about technological change—that will serve you for your entire career.
The students who work hard over the next few years to understand these paradigm shifts will be the ones who fully unleash the power of the Cognition as a Service age.
Everyone else will be users of the systems those students build.
Which will you be?
Now, let's begin the journey. From ENIAC to the edge of a future where intelligence is as abundant as electricity, and everyone with insight and determination can program reality itself.
Your moment has arrived. Don't waste it.
Professor's Note: What follows is dense, intellectually demanding, and will require you to think in ways you might not be used to. That's intentional. The easy path leads to mediocrity. The hard path leads to mastery. Choose accordingly.

The Social Architecture of Data: How Computing Systems Mirror Their Age

Every technology is born wearing the clothes of its era.
The cathedral-like mainframe centers of the 1960s and 70s, with their raised floors and climate-controlled sanctums, didn't just resemble the corporate hierarchies they served—they embodied them.
And nowhere is this more evident than in how we organized data itself.

The Command-and-Control Machine

The postwar American corporation was built on principles borrowed directly from military organization.
During World War II, companies like Ford and General Motors had transformed themselves into engines of military production, managing vast, complex operations with unprecedented scale and precision. The organizational principles that won the war—clear chains of command, specialized roles, centralized decision-making, standardized procedures—became the template for corporate America in the decades that followed.
This wasn't metaphor. The executives running 1950s and 60s corporations were often literally the same men who had commanded military operations. Robert McNamara went from running statistical control for the Army Air Forces to becoming Ford's president, then Secretary of Defense.
The "organization man" in his gray flannel suit worked in hierarchies as rigid as any military structure: you reported to your manager, who reported to their manager, who reported upward through layers until you reached the executive suite, which reported to the board.
Information flowed the same way. Decisions came down from the top. Data flowed up from the bottom, aggregated and summarized at each level until it reached someone with authority to act. You didn't question this structure any more than a private questioned a general. The system was the hierarchy.
When IBM introduced computing into this world, they didn't disrupt this model—they reinforced it. The mainframe computer sat in a central location, behind locked doors, tended by a priesthood of operators and programmers. You didn't touch the computer. You submitted requests to the computer department, and they decided whether your job was important enough to run. The machine itself was organized in layers of privilege and access, with the operating system at the top controlling everything below it.

The Database as Organizational Chart

The earliest database systems were called "hierarchical databases," and the term was brutally literal. IMS (Information Management System), IBM's flagship database that dominated the late 1960s and 70s, organized data in trees. At the top was a root record. Below it were child records. Below those were more children. Just like a corporate org chart.
Need to find information? You started at the top and navigated down through the hierarchy, following pointers from parent to child. Want to know about an employee? Start with the company, navigate to the division, then the department, then the employee. Every query was a journey through the hierarchy, and you had to know the exact path to get there.
This made certain queries very fast—the ones that followed the hierarchy. But God help you if you wanted to ask a question that cut across the hierarchy. Want to find all employees hired in 1972, regardless of department? You'd have to traverse the entire tree, visiting every branch. It was the database equivalent of needing the CEO's permission to talk to someone in a different department.
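To make the pain concrete, here is a rough sketch in modern SQL rather than IMS's actual DL/I call interface (the tables nodes and employees and their columns are hypothetical, purely for illustration). Walking the hierarchy means recursing from the root and visiting every branch; a flat table simply states the question:

WITH RECURSIVE org AS (
    SELECT id, parent_id, kind, hired_year
    FROM nodes
    WHERE parent_id IS NULL              -- start at the root record
    UNION ALL
    SELECT n.id, n.parent_id, n.kind, n.hired_year
    FROM nodes n
    JOIN org o ON n.parent_id = o.id     -- follow pointers from parent to child
)
SELECT id
FROM org
WHERE kind = 'employee' AND hired_year = 1972;

-- Against a flat table, the same question is a single declaration:
SELECT id FROM employees WHERE hired_year = 1972;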
Network databases, built on the CODASYL model (the standard that competed with IMS's hierarchical approach), were slightly more flexible. They allowed records to have multiple parents—you could navigate through the data using different paths. But you still had to explicitly program these navigation routes. The database was a map, and you were walking through it step by step, following pointers like a rat in a maze.
Both systems reflected the corporate reality perfectly: data, like authority, was organized in rigid structures. Access required knowing the proper channels. Flexibility was sacrificed for control.

Codd's Revolution: The Algebra of Equality

This is what Edgar Codd was rebelling against when he published "A Relational Model of Data for Large Shared Data Banks" in 1970. And here's where it gets interesting: Codd was a mathematician, and he saw the problem through a mathematician's eyes.
He built his relational model on set theory and predicate logic—branches of mathematics that deal with collections of things and logical relationships between them. In set theory, there's no hierarchy. A set is just a collection of elements. One element isn't "above" or "below" another. They're just... there, together, in the set.
But here's the crucial detail that's often missed: Codd's relational algebra, despite being inspired by the democracy of sets, actually created a different kind of hierarchy—a Borg-like collective where individual data points lost their independence and existed only as components of larger wholes.
In the relational model, data lives in tables (relations, in mathematical terms). Each row is a tuple—an ordered collection of values. Each column is an attribute. And here's the key: individual data points have no independent existence. They exist only as parts of rows, which exist only as parts of tables, which exist only within the database.
You can't just "look at" a piece of data. You query the collective. You ask the database, using SQL, to project certain columns, select certain rows, join multiple tables according to their relationships. The relational operators—SELECT, PROJECT, JOIN—operate on entire sets at once. You don't navigate through individual records; you describe patterns and let the database engine find all matching records simultaneously.
In a sense, this is deeply egalitarian: every row in a table has equal status. There's no "parent" row or "child" row. A customer record isn't "above" an order record—they're just related through a key value. The rigid hierarchy is flattened.
But in another sense, it's profoundly collectivist: individual data points are absorbed into the collective. You don't interact with them directly. You issue declarative statements—"Give me all customers in California"—and the database collective processes your request and returns a result set. The implementation details, the actual mechanics of how the data is stored and retrieved, are hidden behind an abstraction layer.
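In Codd's algebra, that request is a single set-valued expression (a schematic rendering; the relation and attribute names are illustrative):

\pi_{customer\_name}\bigl(\sigma_{state = 'CA'}(\mathit{Customers})\bigr)

The selection \sigma filters whole relations and the projection \pi keeps whole columns; no record-at-a-time loop appears anywhere.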
This is why SQL reads the way it does:
SELECT customer_name, order_total
FROM customers
INNER JOIN orders ON customers.customer_id = orders.customer_id
WHERE order_date > '2024-01-01'

You're not saying "Start with customer 1, check if they have orders, if so get the first order, check its date..." You're making a declaration to the collective: "Show me this pattern." The database engine—the hive mind—figures out how to execute it efficiently.

The Contradictions of Liberation

Codd's vision was genuinely democratic in intent. He wanted to free data from the tyranny of navigational complexity. He wanted business analysts, not just programmers, to be able to query databases. SQL (born as SEQUEL, designed at IBM by Don Chamberlin and Ray Boyce on top of Codd's model) was built to be readable, almost like English: SELECT what you want FROM where it lives WHERE certain conditions apply.
And it worked!
The relational model did democratize data access in real ways. A marketing analyst could write SQL queries without understanding how indexes worked or where data was physically stored. Questions that required custom programs in IMS could be answered with a few lines of SQL.
But the mathematical foundations created their own form of rigidity. Codd's relational algebra is beautiful and elegant precisely because it's formal. It has rules. Normal forms. Constraints. ACID properties (Atomicity, Consistency, Isolation, Durability) that ensure data integrity by enforcing collective discipline.
The database schema—the structure of tables and their relationships—had to be designed carefully, normalized according to Codd's normal forms. First normal form: eliminate repeating groups. Second normal form: eliminate partial dependencies, so every column depends on the whole key. Third normal form: eliminate transitive dependencies, so every column depends on nothing but the key. This discipline was necessary for data integrity, but it also meant the database enforced a conceptual hierarchy—a correct way to model reality.
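A minimal sketch of that discipline (a hypothetical, Postgres-flavored schema, not any particular production design): each customer fact lives in exactly one place, and orders point to it by key instead of repeating it.

-- Third normal form in miniature: every non-key column depends on
-- the key, the whole key, and nothing but the key.
CREATE TABLE customers (
    customer_id   INTEGER PRIMARY KEY,
    customer_name TEXT NOT NULL,
    state         TEXT
);

CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers (customer_id),
    order_date  DATE NOT NULL,
    order_total NUMERIC(10, 2)
);

Renaming a customer is now one UPDATE instead of thousands. That is the integrity the normal forms buy, and the price is paid later, at query time, when the pieces have to be joined back together.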
And the SQL standard itself became a kind of institutional authority. The ANSI SQL committee, dominated by database vendors and academics, defined what was proper SQL and what wasn't. Different vendors extended SQL in incompatible ways, creating dialects and lock-in. The language that was supposed to free data created its own forms of control.

The Corporate Mirror

But here's what's fascinating: even as the relational model flattened the navigational hierarchy of IMS, it arrived in an era when corporate hierarchy was still absolute.
The 1970s and 80s—when relational databases spread through the business world—were the era of corporate consolidation, of conglomerates and command-and-control management reaching their zenith.
So while the data model was theoretically egalitarian, the access model remained hierarchical. Database administrators (DBAs) became the new priesthood. They controlled the schema. They granted permissions. They optimized queries. Regular employees got read-only access if they were lucky. The database might have been relational, but access to it remained as stratified as ever.
And the relational model fit corporate needs perfectly because it enforced uniformity. Every customer record looked the same. Every transaction followed the same rules. The database was the ultimate realization of the Weberian bureaucracy—rational, predictable, standardized. It was the perfect tool for a corporation that wanted centralized control over its information while allowing controlled, standardized access by approved users.
The relational database became the system of record, the single source of truth, the authoritative version of corporate reality. It was simultaneously democratic (anyone with permissions could query it) and totalitarian (the schema defined what could be true, and the DBA controlled the schema).
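The whole arrangement fits in a few lines of SQL (a Postgres-flavored, illustrative sketch; the role and table names are hypothetical). The DBA writes statements like these, and everyone else lives inside them:

-- The priesthood at work: the schema defines what can be true,
-- and the grants define who may look at it.
CREATE ROLE analyst;
GRANT SELECT ON customers, orders TO analyst;                     -- read-only, if you're lucky
REVOKE INSERT, UPDATE, DELETE ON customers, orders FROM analyst;  -- and nothing more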

The Seeds of Disruption

Codd probably didn't intend to create a Borg-like collective. He was trying to solve real problems: data independence, query flexibility, logical clarity. The elegance of his mathematical model was a feature, not a bug.
But systems take on lives of their own. The relational database, born from set theory and predicate logic, evolved in a corporate environment that bent it toward standardization and control. SQL became the lingua franca of corporate data precisely because it offered a controlled way to democratize access—democracy within hierarchy, flexibility within structure.
By the late 1980s and early 90s, cracks were appearing in the edifice. Object-oriented programming was challenging the relational model's insistence that data be decomposed into flat tables. The web was emerging, and it didn't organize information hierarchically—it linked documents in a decentralized mesh. Startups were questioning whether they needed the overhead of a full RDBMS.
The command-and-control era was ending, in business and in computing. The next generation would look at the relational database and ask: Why does all data need to live in a central schema? Why do we need ACID guarantees for everything? Why can't data points have their own independent existence?
They would build something different. Something that mirrored a new social milieu—decentralized, networked, flexible, and chaotic.
But that's a story for the next chapter.

JSON: The Data Model That Set Information Free

In April 2001, Douglas Crockford and Chip Morningstar sent the first JSON message. It wasn't meant to be revolutionary. They were just trying to solve a mundane problem at a startup called State Software: how to get data from a Java server to JavaScript running in a browser without the overhead and complexity of XML.
Crockford has said he "discovered" JSON rather than "invented" it. And that's the key to understanding why JSON became the data format that powered the social media revolution and the platform economy: it wasn't designed by committee or imposed from above. It emerged organically from the structure of JavaScript itself—a language that was itself cobbled together in ten days, messy and brilliant and perfectly suited to the wild, decentralized web that was about to explode.

The Accident That Changed Everything

JSON stands for JavaScript Object Notation, and Crockford's design principles were simple: minimal, textual, and a subset of JavaScript.
That was it.
No grand theory.
No mathematical formalism.
Just: "Here's how JavaScript already represents data. Let's use that."
{
  "username": "alice",
  "posts": [
    {
      "id": 1,
      "content": "Just had the best coffee!",
      "timestamp": "2024-10-25T10:30:00Z",
      "likes": 42,
      "comments": [
        {"user": "bob", "text": "Where??"}
      ]
    }
  ]
}

Look at that structure. It's not normalized. It's not in third normal form. The comments are inside the post, which is inside the user object. It's nested, hierarchical, and completely denormalized.
To a database administrator trained on Codd's relational model, it's a nightmare.
To a web developer trying to build something quickly, it's perfect.
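A sketch of why (using PostgreSQL's JSONB columns as a stand-in for a document store; the table user_docs and column doc are hypothetical): the aggregate is stored whole and fetched whole, with no JOINs anywhere.

-- One lookup returns alice's posts with comments already nested inside:
CREATE TABLE user_docs (
    username TEXT PRIMARY KEY,
    doc      JSONB NOT NULL
);

SELECT doc -> 'posts'
FROM user_docs
WHERE username = 'alice';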

What SQL Takes Away

Remember what SQL forces you to do. You have a business domain—say, a social network. You've got users, posts, comments, likes, follows, messages, notifications. Each of these is rich with context and nuance. A comment isn't just text—it's a moment in time, a reply to a specific part of a post, maybe it has a gif attached, maybe it's nested three levels deep in a conversation.
But SQL says: Flatten it. Normalize it. Put users in one table. Posts in another. Comments in a third. Likes in a fourth. Now connect them with foreign keys. Make everything look the same.
This normalization was sold as eliminating redundancy and ensuring data integrity. And it does that! But it also eliminates meaning. The subtle interconnections—the fact that this comment was made in the context of that conversation, on that post, at that moment—all of that context is shredded across multiple tables.
Want to display a post with its comments? You write a JOIN. Want to include the users who made those comments? Another JOIN. Want their profile pictures? Another JOIN. Want the count of likes on each comment? A subquery or another JOIN.
By the time you're done, your query looks like this:
SELECT
    p.post_id, p.content, p.timestamp,
    u.username, u.profile_pic,
    c.comment_id, c.comment_text, c.comment_timestamp,
    cu.username AS commenter_username,
    COUNT(l.like_id) AS like_count
FROM posts p
INNER JOIN users u ON p.user_id = u.user_id
LEFT JOIN comments c ON p.post_id = c.post_id
LEFT JOIN users cu ON c.user_id = cu.user_id
LEFT JOIN likes l ON c.comment_id = l.comment_id
WHERE p.post_id = ?
GROUP BY
    p.post_id, p.content, p.timestamp,
    u.username, u.profile_pic,
    c.comment_id, c.comment_text, c.comment_timestamp,
    cu.username;