

AI in IT: Brilliant, Terrifying, and Probably Here to Stay

AI is making software work dramatically faster and more accessible, but privacy, security, reliability, and trust still come with serious tradeoffs.

Tags: AI, IT, privacy, security, automation, coding

Artificial intelligence has reached a point where it feels almost impossible to ignore. It is useful, wildly capable, and in many cases genuinely life-changing. It is also, to be honest, a little terrifying.

Every day, more people are giving tools like Claude Code and Codex access to their codebases, their workflows, and sometimes even their entire computers so these systems can complete tasks on their behalf. From a productivity standpoint, that is incredible. From a privacy and security standpoint, it raises some very uncomfortable questions.

The trust problem

When you think about how much sensitive information can end up in the hands of AI companies, the situation starts to feel much less futuristic and much more unsettling.

These companies may be required to keep logs and conversation records for long periods of time, potentially months or even years. That alone is enough to make anyone pause. Add to that the fear that personal data, medical information, private chats, and other sensitive interactions could become part of training pipelines, and the excitement around AI starts to come with a very heavy asterisk.

Then there is the reliability problem. I have seen a worrying number of stories online where AI agents do things no sensible developer would ever approve of, like deleting a production database or making destructive decisions that even a junior developer would think twice about.

That is not just a bug. That is a reminder that giving autonomy to a system without proper safeguards is less "innovation" and more "speedrunning chaos."

Why people keep using it anyway

Still, if we set those concerns aside for a moment and focus on the benefits, it becomes obvious why AI has become so appealing in daily life and in the IT world. With enough sandboxing, guardrails, and preventative measures, it should be possible to get the upside of AI without inviting every possible downside in through the front door.
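To make "sandboxing and guardrails" a little more concrete, here is a minimal sketch of one approach: running an agent inside a locked-down container with no network access and only one writable directory. This assumes Docker is available, and `my-coding-agent` is a hypothetical image name, not a real product.

```python
import subprocess


def sandboxed_agent_command(project_dir: str, image: str = "my-coding-agent") -> list[str]:
    """Build a `docker run` invocation that confines a coding agent.

    The agent sees only `project_dir` (mounted at /work) and has no network
    access, so even a badly misbehaving agent cannot reach a production
    database. `image` is a hypothetical agent image, used for illustration.
    """
    return [
        "docker", "run",
        "--rm",                        # throw the container away afterwards
        "--network=none",              # no network: nothing to leak, nothing remote to delete
        "--memory=2g", "--cpus=2",     # cap resources so a runaway loop stays contained
        "-v", f"{project_dir}:/work",  # the ONLY writable path the agent can see
        "-w", "/work",
        image,
    ]


cmd = sandboxed_agent_command("/home/me/hobby-app")
# subprocess.run(cmd, check=True)  # uncomment once Docker and the image actually exist
```

The details will differ per setup, but the principle is the point: the blast radius of a bad decision is whatever you mounted into the container, and nothing more.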

And the upside is very real.

Manually doing assignments or homework increasingly feels inefficient. Building hobby projects from scratch by hand, or even working through parts of a professional software project manually, can feel unnecessarily exhausting now. AI has changed the scale of what one person can realistically accomplish.

A normal 9-to-5 employee who wanted to build a small web app for personal use might have needed months of evenings and weekends to make it happen. Now, that same person can step back, look at the bigger picture, give clear and strict directions to a coding agent, and see results in minutes.

That is a massive shift. AI is not just saving time. It is changing who gets to build things in the first place.

The adoption hurdle

I suspect that, over time, companies and schools will start treating AI as a normal part of productivity rather than some suspicious shortcut. Eventually, it may become as accepted as autocomplete, autofill, or spellcheck. At first, those tools also felt like cheating to some people. Now they are just part of the furniture.

That said, I think one major step still needs to happen before organizations fully embrace AI in everyday processes: powerful self-hosted AI needs to become practical.

Right now, there are open-source options and local tools that people can run themselves, such as models served through platforms like Ollama. They are useful, and in some cases impressive, but they are still not at the level of the flagship models from OpenAI or Anthropic.
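As a sketch of what "local" means in practice: Ollama serves a small REST API on the machine it runs on (port 11434 by default), so a script can query a model without any request leaving the box. This assumes `ollama serve` is running and that a model such as `llama3.1` has already been pulled; the model name is just an example.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint.

    `stream: False` asks for one complete JSON response instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply.

    Nothing here leaves the machine: the request only ever goes to localhost.
    """
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama server and a pulled model):
# print(ask_local_model("llama3.1", "Summarize the risks of cloud-hosted AI."))
```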

What we really need are models that are both powerful and realistically self-hostable, along with hardware that is affordable enough for companies to justify buying and operating it.

Why self-hosting matters

Self-hosting changes the trust equation completely. If the model runs on a company's own hardware, inside its own network, then the risk of leaking secrets becomes much smaller. Sensitive internal data stays internal. For schools and companies alike, that could make AI adoption far easier to justify.

Unfortunately, that future still feels far away.

Computer hardware is already extremely expensive, and ironically, AI itself is part of the reason why. As demand for computing power keeps rising, hardware prices continue climbing. Production is being pulled toward large-scale AI infrastructure, and companies that once focused more heavily on consumer hardware are increasingly shifting their attention toward building for AI demand instead.

There is also the awkward financial reality behind many AI companies. A lot of them seem to be operating on aggressive partnerships, huge funding rounds, and the hope that scale will somehow solve the economics later.

I have seen claims online suggesting that some companies spend two to three times the value of a user's subscription just to serve them. In other words, a $20 subscription may actually cost $40 to $60 in compute. And that does not even account for free tiers, which are clearly designed to bring users in and eventually convert them into paying customers.
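That claim is easy to sanity-check with back-of-the-envelope arithmetic. Assuming, purely for illustration, a $20/month subscription and a serving cost of two to three times that figure:

```python
def monthly_loss(subscription: float, cost_multiple: float) -> float:
    """Loss per paying user when serving cost is a multiple of the subscription price."""
    return subscription * cost_multiple - subscription


price = 20.0  # an illustrative "pro" subscription, per month
for multiple in (2.0, 3.0):  # "two to three times" the subscription's value
    loss = monthly_loss(price, multiple)
    print(f"At {multiple:.0f}x serving cost, each ${price:.0f} user loses the provider ${loss:.0f}/month")
```

If those multiples are even roughly right, every paying user deepens the hole, which is exactly why scale alone cannot fix the economics.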

Useful, uncomfortable, unavoidable

So yes, I have plenty of issues with AI. I worry about privacy, I worry about security, I worry about overreliance, and I definitely worry about handing too much control to systems that still make shockingly bad decisions.

But despite all of that, I cannot pretend these tools have not made life easier.

They have.

And maybe that is the strange reality of AI right now: it is both one of the most useful technologies we have seen in years and one of the most uncomfortable. It makes work faster, lowers the barrier to building things, and expands what individuals can do. At the same time, it forces us to ask whether convenience is quietly training us to ignore risks we would never normally accept.

That tension is probably not going away anytime soon.

Final note

In the spirit of honesty, this blog post itself was rewritten by AI. I gave it my thoughts in detail, and the model turned them into something more suitable for a blog post. Which, in a very fitting way, kind of proves the point.