I’m an AI skeptic. Well, that’s not entirely true. I have been cautious about rapidly incorporating AI tools into my everyday work, even as usage and excitement among others have grown exponentially over the last year.

Recently, though, I’ve been playing with AI solutions more frequently. Frustrated with the quality of Google search results, I started exploring Perplexity AI. It was helpful when I looked up information in the heat of NCAAW March Madness; I appreciated the structure of the results and the linked references that let me quickly validate what was provided. There were instances where I needed it to go deeper or in a different direction, but, in general, it felt more like training up a new research assistant than meeting my future overlord. Then, just a few days after I made it my primary search tool, Casey Newton reported that Perplexity plans to add sponsored questions into the mix, the exact kind of nonsense I’d been hoping to avoid.

That same week, Axios AI+ and other outlets published several articles that echoed my AI concerns:

AI firms think that anything publicly available is fair game. Like many, I’m stuck on the ethical problems in how nearly every one of these models is developed. Most, if not all, of these firms ignore their own policies, the terms of service of other brands, and copyright law as they scrape the internet for inputs and training material. We make limited series and movies about how problematic startup and disruptor business culture is, yet we’re watching it happen all over again.

In general, I think most content-generating AI models make shitty art, though just yesterday people were debating whether a leaked Drake diss track was the real thing or robot rapping, and the technology is scary enough that over 200 musicians collectively protested against AI developers. I find most of it soulless, and disturbing if you spend more than a little time with prompt-rendered images, video, and music, but that won’t stop bad actors and unskilled keyboard jockeys from flooding the zone with junk. I have already experienced this with real humans making generic beats while borrowing old vocal tracks from established artists to sneak into Release Radar and other algorithm-generated playlists on Spotify every week. Ninety-nine percent of these tracks are trash and, I must assume, not benefitting the artists whose coattails they are trying to ride.

Meanwhile, AI is doing little of what we imagine it is. It wasn’t really running the Whole Foods “Just Walk Out” stores. And despite the many influencers I see (and have probably muted) who talk about using AI to replace entire chunks of their jobs, this is not a set-it-and-forget-it solution.

We’re exploring AI solutions for business intelligence use cases on my team. Specifically, analysts are using corporate-sanctioned tools to ease the analysis and reporting burden of A/B tests, which are multiplying faster than our team is growing. AI is a good assistant, but it still requires review and validation. It hallucinates less as our prompt writing adapts to its outputs (note: as we adjust to it, not so much as it adapts to us), but humans remain a critical part of the process and will continue to be.

Despite all of these concerns, I’m not afraid of AI. I’m an optimist and inclined to think about artificial intelligence tools as more like the droids in Star Wars: brilliant assistants who are constantly in service of what sentient beings are trying to do.

Let’s ignore General Grievous and his droid army in this metaphor. 

The prequels aren’t very good, anyway.


Photo by Kenny Eliason on Unsplash
