Tuesday, March 31, 2026

AI Slop, Human in the Loop, and When Languages Still Matter

I've said multiple times that AI is the biggest evolution in software engineering and technology I've seen in my 25-year career. The capabilities of code assistant tools backed by powerful LLMs have gone from simple auto-complete and chat just a couple of years ago to sophisticated implementation today, with local or asynchronous cloud agents working many tasks in parallel to solve problems.

That is not hype, and it is not theoretical. It is happening right now.

I have had a lot of conversations lately with smart people across our industry, and one theme keeps coming up over and over: even the people closest to this space are a bit flabbergasted by how rapidly and exponentially AI is transforming software engineering. We are watching the ground shift under our feet in near real time. What makes this moment feel different is not just that AI can help, but how fast the tools have evolved. We have gone from, "that completion was handy," to systems implementing real features, generating scaffolding, wiring up infrastructure, writing tests, and solving problems across multiple files and concerns. That is a massive jump in a very short amount of time.

It really is incredible what these tools are doing.

We are hearing more and more statements along the lines of, "I'm not even hand-rolling code anymore," or, "I haven't written a line of code in X amount of time." A couple of years ago that would have sounded absurd. Today it sounds more commonplace. That alone tells you how much has changed.

At the same time, for all the excitement, I keep coming back to the same tension. There is a huge difference between accelerated software development and uncritical acceptance of whatever the machine spits out. That is where the phrase 'AI Slop' starts to matter.

The Rise of "Good Enough"

One of the big philosophical questions right now is whether we are heading toward a world where programming languages matter less and less. Are we going to get to a point where languages do not matter? Do we care what the output is? That is a great question, because part of me understands why people ask it. If the output is good enough, it works, you write tests, you ship it, and the result is correct, is that enough?

In some cases, maybe it is.

I think that is where software may start to fall into different buckets. For some kinds of applications, especially non-mission-critical internal enterprise apps, "good enough" may actually be good enough. Maybe it is an HR workflow tool. Maybe it is some internal productivity application. Maybe it is a lightweight dashboard, admin tool, or prototype that helps a business move faster. In those situations, teams may be more willing to accept AI-generated output that is not elegant, not especially idiomatic, and not something a senior engineer would be proud to hand-craft line by line.

If it works, passes tests, and solves the problem, a lot of organizations are going to say, ship it. Part of me is beginning to see this side of the equation.

But there is another category of software where that mindset breaks down very quickly. You might accept that approach for some internal line-of-business application, but would you fly on a plane with code created that way? Would you use a medical device created that way? That is where this discussion gets serious fast.

Human in the Loop Is Not Optional

The phrase "human in the loop" gets used a lot, and sometimes it can sound like a comforting slogan. I think it is a lot more than that. It is the thing standing between powerful acceleration and dangerous overconfidence. Because the problem is not just messy variable names, awkward abstractions, or code that feels a little off. The deeper problem is that these systems can fabricate details with total confidence.

That is the part people underestimate.

I was reflecting on an example recently where "good enough" was peeled back and examined under the covers. It was not about style nits or whether the AI chose the perfect algorithm. It was about the model inventing things that mattered, such as security-related details and keys. And when challenged, it effectively admitted, "To be quite frank, I just made it up."

That is funny for about two seconds.

Then it becomes a sobering reminder that an LLM is not a truth machine. It is not reasoning in the way many people emotionally want to believe it is. It is an incredibly powerful prediction engine that can produce brilliant results and absolute nonsense, sometimes in the same output. That is why human review is not some temporary training-wheel phase we can casually discard. It is part of the engineering discipline.

Human in the loop means architecture review still matters. Code review still matters. Threat modeling still matters. Testing still matters. Domain expertise still matters. Knowing what looks right versus what is actually right still matters.

This is where being able to "call a spade a spade" is immensely important when using AI to generate code. This is also why we as experienced engineers are highly valuable in this AI age. We can do this.

It also means accountability still lands on us. The AI does not carry the pager. The AI does not sit in the postmortem. The AI does not own the security breach, regulatory failure, lawsuit, or customer impact. We do.

So, Do Languages Still Matter?

I think the honest answer is yes, even if the way they matter begins to shift.

If AI keeps getting better at generating working code, then many developers may spend less time manually authoring syntax line by line. That part is probably true. The abstraction layer is rising. In some workflows, we may express intent more than implementation. But that does not mean languages stop mattering.

Languages still shape ecosystems, performance characteristics, deployment models, memory behavior, concurrency patterns, tooling, maintainability, interoperability, and the kinds of mistakes that are easy or hard to make. Languages still influence how systems age over time. They still matter when debugging. They still matter when optimizing. They still matter when the generated output is subtly wrong and somebody has to understand why.

Even if AI becomes the primary producer of code in many cases, humans still need to evaluate the tradeoffs. You may not type every line, but you still need to understand the consequences of what was produced. That is especially true for consultants, architects, and senior engineers. Our value increasingly shifts from, "I can manually write more code than the next person," to, "I can guide systems, evaluate output, recognize risk, and make sound decisions with accelerated tooling."

That is a meaningful shift, and I do not think it diminishes engineering. If anything, I think it raises the bar.

Nobody Has a Crystal Ball

The bottom line is simple: nobody has a crystal ball. Nobody knows exactly what tomorrow holds. We can say 'AI' the same way the industry once said 'cloud' or 'DevOps', but that does not mean we fully understand where it is taking us. We know it is transformative. We know it is already changing how software gets built. We know it can unlock incredible productivity. We also know that parts of the industry are getting ahead of themselves and treating confidence as correctness.

Both things are true at once.

That is why I think the healthiest posture right now is neither cynicism nor blind enthusiasm. It is really more of an optimism tempered by discernment. We should be using these tools, learning them deeply, and pushing them hard, because they can genuinely take a lot of repetitive work off our plate, increase throughput, and help us think bigger and move faster.

But do not confuse fast with sound.

Do not confuse output with understanding.

And definitely do not confuse "it runs" with "it is trustworthy."

AI is changing software engineering faster than anything I’ve seen in 25 years. I’m excited about it, and I’m using it heavily. But I’m also convinced that the more we lean on AI to generate software, the more human judgment, review, and technical depth matter.

Languages, architecture, and accountability still matter, maybe now more than ever.

The tools are incredible, but the responsibility is still ours.

Tuesday, June 17, 2025

I'm Speaking! TrailBlazor Conference 2025

On June 26th, 2025, Devessence Inc., in partnership with Syncfusion, will host the TrailBlazor Conference, a free one-day virtual event showcasing the spirit of excitement and innovation within the .NET developer ecosystem, featuring Blazor, .NET MAUI, .NET Aspire, and Oqtane.

We have lined up some amazing speakers for the event, including members of the Blazor, .NET MAUI & .NET Aspire product teams from Microsoft, as well as many other respected leaders and peers of mine within the .NET community. 

I'll be doing the session, "Zero to Blazor Apps with GitHub Copilot," at 1PM ET (5PM UTC) which showcases using GitHub Copilot to accelerate and enhance developing Blazor applications. Registration is open at https://trailblazor.net - sign up today!



Wednesday, January 29, 2025

Check Out My YouTube Channel! 'The Eclectic Dev'

As technology has evolved over the years, so has the way we share information with the global engineering community. To that end, I created another way to reach out and share software engineering and technology information: 'The Eclectic Dev', my YouTube channel.

Come along with me and learn about the endless world of software engineering! I'll explore a wide variety of topics. Specializing primarily in web, .NET, and cloud-related technologies, I'll journey through these areas and beyond to many eclectic topics of interest, sharing knowledge with the wider global technology community.

Check it out by clicking on the channel logo below, and please subscribe to stay tuned for a host of eclectic topics and as an extension of this blog, which began over 17 years ago!



Wednesday, November 6, 2024

How to Nest Blazor's .razor Files in Visual Studio Code

When working with Blazor in Visual Studio Code, you may encounter some nuanced differences from working in Visual Studio and want greater feature parity. One such feature is the default file nesting of Razor component files, as shown below in Visual Studio:

Visual Studio Code has the capability to nest files, but by default Blazor's files are not nested and appear as separate sibling files.

To fix this, we can update the 'File Nesting Patterns' setting.

Open Settings in Visual Studio Code (Ctrl+, on PC, Cmd+, on Mac) and enter the following search string to jump directly to the setting:

@id:explorer.fileNesting.patterns

Select 'Add Item' with the key: 

*.razor
and use the following Value:

${capture}.razor.cs, ${capture}.razor.css, ${capture}.razor.scss, ${capture}.razor.less, ${capture}.razor.js, ${capture}.razor.ts

You can add any applicable file extensions to the list above as needed. At this point you can close the settings and see the Blazor files nested correctly:
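Equivalently, if you prefer editing settings.json directly, the steps above correspond to an entry like the following sketch (note that 'explorer.fileNesting.enabled' must also be true for nesting to take effect; adjust the extension list to your project):

```json
{
  "explorer.fileNesting.enabled": true,
  "explorer.fileNesting.patterns": {
    "*.razor": "${capture}.razor.cs, ${capture}.razor.css, ${capture}.razor.scss, ${capture}.razor.less, ${capture}.razor.js, ${capture}.razor.ts"
  }
}
```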

Thursday, May 23, 2024

How to Enable 'Hey Code!' Voice Interactivity for GitHub Copilot Chat

With all the smart devices around the house that can be cued with the likes of 'Hey Google!', wouldn't it be great if we could cue up GitHub Copilot in the same manner, whether for ease of use or for accessibility needs? Thankfully, this isn't too difficult to configure in Visual Studio Code. Once configured, you can say 'Hey Code!' and use a voice prompt to interact with GitHub Copilot Chat.

GitHub Copilot Chat 'Hey Code!' Configuration Steps

  1. Open the Command Palette via Ctrl+Shift+P or F1
  2. Type in 'accessibility' to access configuration options and select, 'Preferences: Open Accessibility Settings'
  3. Add 'voice' to the configuration filter and select 'Accessibility > Voice: Keyword Activation'
  4. Select an option for where Copilot Chat interacts with you after saying aloud, 'Hey Code!' in the IDE:
    • chatInView: start a voice chat session in the chat view (i.e. the Copilot Chat main window)
    • quickChat: start a voice chat session in the quick chat (i.e. Command Palette input)
    • inlineChat: start a voice chat session in the active editor if possible (i.e. inline Copilot Chat dialog)
    • chatInContext: start a voice chat session in the active editor or view depending on keyboard focus (i.e. if the current cursor is focused within code in a file, the inline Copilot Chat dialog is used, and if the active cursor is in the Copilot Chat main window this will be used to capture the dialog)
My preference is to use chatInContext as it will toggle inline vs window chat based on current focus, but play around with the options to see which is best for you.
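For reference, the same preference can also be set directly in settings.json. A sketch, assuming the setting ID shown in the Accessibility settings UI:

```json
{
  "accessibility.voice.keywordActivation": "chatInContext"
}
```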

A quick way to access these settings once configured is to press the microphone icon in the bottom task bar of Visual Studio Code which will immediately pull up these same settings to modify directly.


Now try it out and say, "Hey Code! Help me create a new Blazor web application!"

Tuesday, February 6, 2024

How to Fix the GitHub Copilot Chat Error: 'Cannot read properties of undefined (reading 'split')'

If you're using GitHub Copilot Chat within Visual Studio Code, you may unexpectedly begin to see the following error after an IDE update when using Copilot Chat:
Cannot read properties of undefined (reading 'split')

This is caused by the Copilot Chat extension requiring a reload which can be seen from the extensions menu:

Once selecting, 'Reload Required,' Visual Studio Code will reload, and Copilot Chat will begin working as expected again.

Sunday, November 12, 2023

Blazor WebAssembly Lazy Loading Changes from .NET 7 to .NET 8

Lazy Loading is an essential tool used in web client development to defer loading of resources until requested by the user, as opposed to loading everything up-front which is expensive. Blazor has had the ability to lazy load Razor Class Libraries for the last several versions of .NET, but there are some updates in .NET 8 that aren't well documented.

To not be completely repetitive, here are the basic steps for implementing lazy loading in your Blazor WebAssembly application direct from the Microsoft docs: Lazy load assemblies in ASP.NET Core Blazor WebAssembly

The issue is that the Project Configuration and Router Configuration sections of the docs, as of this post, are still not up to date. With .NET 8, the WASM assemblies are now built as .wasm files, not .dll files, so the main updates you'll need to make are inside the .csproj file and within the LazyAssemblyLoader routing code, using the .wasm file extension for the referenced assemblies:

.csproj file updates
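A sketch of the updated item group (the library name 'EngineAnalyticsWebApp.TestLazy' is taken from the error message later in this post; substitute your own Razor Class Library). The key change is the .wasm extension on the lazy-loaded assembly:

```xml
<ItemGroup>
  <ProjectReference Include="..\EngineAnalyticsWebApp.TestLazy\EngineAnalyticsWebApp.TestLazy.csproj" />
  <!-- .NET 8: reference the .wasm file, not the .dll -->
  <BlazorWebAssemblyLazyLoad Include="EngineAnalyticsWebApp.TestLazy.wasm" />
</ItemGroup>
```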


Routing file updates
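A sketch of the corresponding Router configuration in App.razor, following the pattern from the Microsoft docs linked above (the 'test-lazy' route path is a hypothetical example; again note the .wasm extension passed to LazyAssemblyLoader):

```razor
@using Microsoft.AspNetCore.Components.Routing
@using Microsoft.AspNetCore.Components.WebAssembly.Services
@inject LazyAssemblyLoader AssemblyLoader

<Router AppAssembly="typeof(App).Assembly"
        AdditionalAssemblies="lazyLoadedAssemblies"
        OnNavigateAsync="OnNavigateAsync">
    <Navigating>
        <p>Loading...</p>
    </Navigating>
    <Found Context="routeData">
        <RouteView RouteData="routeData" DefaultLayout="typeof(MainLayout)" />
    </Found>
</Router>

@code {
    private List<System.Reflection.Assembly> lazyLoadedAssemblies = new();

    private async Task OnNavigateAsync(NavigationContext args)
    {
        if (args.Path == "test-lazy")
        {
            // .NET 8: pass the .wasm file name, not the .dll
            var assemblies = await AssemblyLoader.LoadAssembliesAsync(
                new[] { "EngineAnalyticsWebApp.TestLazy.wasm" });
            lazyLoadedAssemblies.AddRange(assemblies);
        }
    }
}
```

Note that LazyAssemblyLoader must also be registered in Program.cs (builder.Services.AddScoped&lt;LazyAssemblyLoader&gt;();), per the Microsoft docs.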


If you continue to use the old code with the .dll extension, you will get the following error when building your application:
Unable to find 'EngineAnalyticsWebApp.TestLazy.dll' to be lazy loaded later. Confirm that project or package references are included and the reference is used in the project.
Upon making the required updates to your .NET 8 application to prevent the error above, your app should build successfully, and you'll see the correct deferred loading behavior in the browser.