Trendshredders: On Tech and Early Adoption

“While technology might do a good job of bridging the gap between us, we have to wonder if the distance was ever there in the first place.”

A few years ago, we published a now-deleted post called “Rebuilding Babel: On Ethics and AI.” 

It was supposed to be an explanation of our AI-use policies, but the former teammate who wrote it couldn’t resist the conclusion that all tools are neutral; the morality of technology is in how we use it. It’s a common refrain, and there’s a comforting sort of sense to its balance. We can all have our cake and eat it too—AI is right for you but wrong for me.

There’s a reason we removed the post.   

The Law of Retraction: A 2026 Reflection on the Morality of Technology

Neutrality is the most subtle fallacy, and it’s one that’s dangerous in its comfort. Nothing is designed without a purpose, and that’s the problem. We’re in a moment where technological terms of engagement are constantly redefined, but it’s started to feel normal: 

Netflix drops our favorite show right as they announce a $2 increase. 

                  Eh, we think, we’ll cancel it later.

A software company states that you don’t own the app, just the temporary right to use it.

                  Well, it seems, it’s functionally the same. 

A line buried in the Instagram user agreement surrenders your private DMs to train chatbots.  

                  That’s okay, we suggest, I wouldn’t send anything questionable.

A whistleblower calls out coworkers for watching private sexual content from Meta glasses.

                  Hm, we reason, people should have known better than to use their glasses so carelessly.

Headlines like these are so commonplace that they’re mundane, and for many of us, the push-pull of whether or not to engage has become a shrug of passivity. I like Instagram. I’m not deleting it. 

The morality of technology is about more than privacy, and it’s something deeper than ownership. It’s about who defines our relationship to reality and each other.

I want to introduce two ideas here that feel relevant in 2026:

  1. We really need genuine human connection.
  2. Jean Baudrillard’s Simulacra and Simulation suggests a reason why the first point is becoming increasingly difficult.

A lot of our Amenable brand-speak centers on relationships and thinking about the kinds of care we owe to each other, but technology does something unexpected: it reinterprets those relationships for us. Beyond algorithmic anxieties and Luddite handwringing about morality and technology, every new tool fractures the simplicity of human connection even when it facilitates it. 

But why?

From Lilo and Stitch to Download and Switch

Bear with me here, but I’m going to invoke a heady essay: Jean Baudrillard’s Simulacra and Simulation.  

The long and short of it is that we gravitate toward artifice in our definition of reality until the distinction is irrelevant. In 1981, Baudrillard provided the example of Disneyland, ostensibly appealing because it offered escapism, but actually because it simulated American values until they were canonized as real. To put it another way, Disneyland felt like a window to another world, but it was more like a funhouse mirror, reflecting and subtly reinterpreting how people saw themselves.

To use a cliché, the medium became the message. The representation became the reality.

To modernize this a bit, think about how Disney makes live-action versions of all of its animated films. Gradually, we stopped talking about whether they were good movies and started talking about whether they were good copies. We debate whether Will Smith’s Genie can compare to Robin Williams’s Genie, not whether the character works narratively. As I write this, the live-action Moana trailer is freshly online, and the general consensus is, “Wait, who is this for? What is this doing except pointing to a thing we already recognize?” It can’t be engaged with as a movie in its own right, and yet Disney is such a powerhouse that now this is what all movies are. The entire theater experience has been reinterpreted for us, and we don’t have many “real” alternatives.

The same is true of technology as a whole. 

Our tech platforms used to do their best to represent recognizable real-world counterparts. The telephone brought long-distance friends into the room. The television brought the stage to the screen. This insistence on authenticity made it hard for them to have much of a worldview and easy to reject what little they did. The morality of technology was often moot because its purpose was so limited. Nevertheless, it still shifted how we engage with people. Consider something as simple as email: it resembles a letter, but every UX decision is ultimately based on someone’s interpretation of how you should relate to other people. It’s innocuous, and yet emails now have such a rigid rhetorical purpose that each time you write one, you’re likely thinking more about what it is supposed to sound like than the person on the other end. The form matters more than the substance. (I mean, that’s why it’s appealing to just have ChatGPT do it, a move that further redefines what is “real.”)

But where earlier technology had real-world relational patterns to mirror, our current moment is self-referential to a fault—the live-action remake of an animated version of reality, and one that has a clear agenda. It’s often touted as fixing a problem, but are those problems actually there? Think about how Twitter originally lent itself to the quippy cadence of a conversation before morphing into viral soundbites and eventually diatribes when Elon Musk upped the character limit. That was an intentional redefinition of how people were meant to interact with each other, but because it was modifying something without a “real world” equivalent, it was easy to look past a pretty drastic enforcement of one man’s worldview. 

So what?

Morality, Technology, and a Takeaway

AI is where these ideas reach a fever pitch. As so much of our daily life occurs in a purely digital space, our relationship to reality is more malleable. We’ve been trained to view Google as a sort of neutral repository of knowledge, rather than a business with its own priorities, and this is further reinforced by how it incorporates AI search. Knowledge is good, right? Isn’t it even better to have a tool that makes sense of it for you? Likewise, each time we ask ChatGPT a question, its answer is carefully curated with hidden biases that can be edited at a whim. Again, take a look at how Elon has made Grok more effusive about himself.

Objective truth has never been more subjective.

More importantly, as technological development continues to trudge forward, we are left with fewer “real” options. It’s much easier to send an email than a letter, and it’s much easier to send a text than an email. It’s even easier to have an AI write that text, not as you, but as an external interpretation of you—a solution to an invented problem. These “representations” make life very easy, but they also make it less ours.  

Microsoft Copilot would like me to make that paragraph more concise and less adamant.

My point is this: any tool that incentivizes its use also ensnares its users, and once we accept a technology, especially one that claims to help us communicate better, we cannot easily walk it back. Think about how the marketing for AI tools has sidestepped questions of the morality of technology to instead insist that technology is moral. The invitation becomes obligation becomes mediation, and while a language model might do a great job of bridging the gap between us, we have to wonder if that distance was ever there in the first place.

For more reflections on this subject, I encourage you to read Kate Lindsay’s excellent “Stop Saying We’re Cooked”, a piece that was in the back of my mind as I wrote.
