Embodiment, AI, and the Human Question — A Conversation on Technology and Theology with Jared Hayden

Jared Hayden on 3 June 2025 at the MCC Summit Budapest
Tamás Gyurkovits/Hungarian Conservative
In an era defined by rapid technological progress, the relationship between embodiment, theology, and digital innovation is becoming increasingly urgent. This conversation with Jared Hayden, policy researcher at the Institute for Family Studies (IFS), offers a deeply reflective exploration of how artificial intelligence, remote work, and internet culture are reshaping not just society—but humanity itself.

In an interview during the Mathias Corvinus Collegium’s (MCC) Budapest Summit on Technology and Society, Jared Hayden, Policy Analyst at the Institute for Family Studies, shared his insights on the intersection of theology, technology, and human embodiment in the digital era. The discussion covered remote work, artificial intelligence (AI), and the broader implications of technological advancement on society.

***

My first question is about theology and technology—especially embodiment in the digital age. What’s your take on this overall, and how do remote work and AI affect it?

A lot of different aspects come into play when we think about embodiment and technology. I think the key thing is that technology isn’t just a social network issue or an innovation issue. It’s ultimately an anthropological problem. It’s about what it means to be human—especially when we’re using tools as powerful as the ones we’ve developed over the last 50 to 100 years.

I was talking to my grandfather recently—he grew up in a rural town in Michigan in the 1940s or 50s. He remembers the day he got running water. He remembers the first time he heard a jet overhead and didn’t recognize the sound. These are watershed moments. The internet was one. And now AI is another.

AI isn’t totally new. Social media algorithms were already a kind of early, nascent form of AI. I think we’ve already gotten used to it in many ways. We live in algorithmic environments—we’re already formed by them.


As for remote work, some of my colleagues at the Institute for Family Studies looked into this. It turns out that fertility rates were higher for married women who worked remotely from home. So you can’t just write tech off. The temptation is to decry it completely—go full Luddite—or to worship it. Or to say it’s neutral. But that’s wrong. These tools are not neutral. They shape us.

‘Technologies aren’t neutral. They come with internal logics, and they affect our values and behaviours’

One of the core mistakes in tech policy is saying: ‘We’ll just take our hands off, let it develop, and people will figure it out.’ Whether that’s the invisible hand of the market or just assuming good intentions, it doesn’t work. Technologies aren’t neutral. They come with internal logics, and they affect our values and behaviours. That’s why McLuhan’s idea that ‘the medium is the message’ matters so much.

Do you think AI is overregulated right now? Some people argue it’s too early to regulate anything so heavily—we don’t even know what AI will become. Shouldn’t we wait and see?

I hear that argument a lot. My colleague Michael and I at IFS work on these issues. He’s big on getting upstream in the policy process. That means shaping tech before it shapes us.

Look at what happened in the US with social media. In the 1990s, Congress passed Section 230 of the Communications Decency Act. The idea was: this is a new industry, let’s not regulate it too much or we’ll stifle growth. But what happened? Those tiny online services became massive Silicon Valley titans. And they weren’t held responsible for user content—even though they had the tech to build safer platforms. They only started caring when Congress threatened regulation.

What we are trying to do right now is swim upstream. A lot of the core technologies behind the smartphone and the internet came out of federal R&D. Companies like Apple just synthesized them in new ways. So why not build social and family policy into that R&D phase next time?


The internet was world-changing. AI could be even more so. And the question is: to what end? If we just take our hands off the wheel, the change won’t necessarily serve the public good. For boomers, a computer might just be a tool. For millennials and Gen Z, it’s a world.

We know now what we didn’t know in the early days of the digital revolution. We have the opportunity to shape this next chapter. Why wouldn’t we use it?

We’re both people who were basically born into the internet, and we’re now seeing a generation being born into AI. When it comes to families and especially kids growing up with this technology, how do you see the threats and potential positives?

I was at a panel in DC, and during the Q&A, someone mentioned that at their news organization they were encouraged to use AI to summarize—but not to draft. And I think that’s wise. Writing is a discursive process. It clarifies my thinking. Just asking a statistical model to string words together doesn’t help with that.

But I know people who work more in data and statistics, and they’re using AI to check and amplify their work in really profound ways. The question is: what does that mean for everyone else?

For a lot of people, AI is just like a more powerful Google. That’s how it’s being used and integrated. And that probably isn’t its most beneficial use. But there are some promising developments, like medical research where AI found new applications for existing drugs.

I also heard about a doctor who used AI to scan cancer tissue samples so that nothing was missed. That kind of thing could be life-saving.


However, we have to be deliberate in deciding how we use it. We also can’t assume this is a neutral technology. It’s not just about how you use the tool—it’s about how the tool reshapes us. Our relationships, our ways of thinking, our self-conception. A conservative approach should consider those deeper, more holistic levels—not just economics or national security.

Some of us applauded the Trump administration for naming ‘human flourishing’ as a goal. But that has to be defined. It has to be pursued deliberately. You can’t just equate human flourishing with economic competitiveness or job creation. Those things matter—but they’re not enough.

That brings us to regulation and political response. Do you think different ideologies might unite on this issue—or will it remain divided?

I don’t have a concrete prediction. In the US at least, technological innovation isn’t just a left or right issue. Both sides have supported it, even if in different ways or for different reasons. With respect to AI, there’s actually broad agreement that it is something we should invest in—and it’s something the US has already invested in.

But there are doubts about how serious the right is about regulation. Age verification laws for tech have had strong bipartisan support. That’s encouraging. But with AI, it still feels like a lot of people think it’s just a neutral tool—it’s all about how you use it.

The mindset that AI is just a tool delays regulation. It means we won’t act until after AI is already fully integrated into society. That’s what happened with smartphones and social media. And now we’re playing catch-up.


Last month, the House approved language in a spending bill that would create a 10-year moratorium on state-level AI regulation. That’s deeply concerning. Even if AI is still developing, we shouldn’t block states from acting. It’s naïve to think we can just wait. However, there are growing bipartisan coalitions, especially around concerns like AI in education and tech’s effects on kids.

‘We do need to understand that these companies—these AI developers—are primarily profit-driven. They care about human rights or safety only insofar as it helps their bottom line’

At the moment, however, it’s more like full speed ahead from the Trump administration. Meanwhile, some in the Biden administration were exploring things like classifying AI. I’m not sure that’s the right approach either. But we do need to understand that these companies—these AI developers—are primarily profit-driven. They care about human rights or safety only insofar as it helps their bottom line.

My hope is that conservatives wake up to this and take the lead on regulation—not for the sake of controlling the market, but to promote real human flourishing. Not just prosperity, but the good of the person, of families, of communities. That’s what’s at stake.


More from the event:

Technology, Creativity, and the Soul: MCC Summit Day 1 Debates the Future of AI
French Attorney Stéphane Bonichot Talks to HuCon about AI, Big Tech Regulation