Are people’s bosses really making them use AI tools?

Andy Bell

Topic: Opinion

This is not the usual type of content you’ve come to expect from Piccalilli, but I feel this topic, specifically, is an important aspect of our work to cover because, as I see it, making or encouraging your development staff to use AI tools in their work is extremely short-sighted and risky.

I want to support that stance with some conversations I’ve had with people actually doing the work, and their mostly less-than-favourable experiences.

I asked this question across social media:

Is your boss encouraging you to/making you use AI tools for development?

I’m thinking about working on a piece about that on Piccalilli.

It’s sensitive for sure, so more than happy for people to be anonymised.

The reason I asked is that — as you can imagine — I speak with a lot of developers on a day-to-day basis. In my personal network, these are often very experienced, senior developers, but I’m hearing the same stories from juniors too. It boils down to:

My boss is making/encouraging me to use AI every day and during every part of my work.

I had an urge to explore this further — making sure I wasn’t in an echo chamber — and, yeh, you’re probably not going to enjoy what I discovered.

Before we dig in, allow me to set some ground rules and facts

  1. If you’re a fan of AI companies/tools, this article is not a personal attack on your preference
  2. This article does not devalue the good stuff you might feel like you’re doing with AI
  3. All participants are completely anonymised for their privacy and protection
  4. I have re-worked some of the responses to assist with point 3
  5. Any opinions are mine unless specified

What I learned from my conversations

I’ve had several conversations with developers and designers working across the industry for this piece, all with varying experience levels.

I spoke with a developer working in the science industry who told me, “I saw your post on Bluesky about bosses encouraging AI use. Mine does but in a really weird way. We’re supposed to paste code into ChatGPT and have it make suggestions about structure and performance optimisations.”

I pressed further and asked whether, overall, this policy is causing problems with the PR process.

In reference to their boss, they said, “It’s mostly frustrating, because they completely externalise the review to ChatGPT. Sometimes they just paste hundreds of lines into a comment and tell the developer to check it. The juniors especially hit problems because the code doesn’t work anymore and they have trouble debugging it.”

“If you ask them technical questions it’s very likely you get a ChatGPT response. Not exactly what I expect from a tech lead.”

Immediately, I thought their boss had outsourced their role to ChatGPT, so I asked if that was the case.

“Sounds about right. Same with interview questions for new candidates and we can see a lot of the conversations because the company shares a single ChatGPT account.”

I asked for further details and they responded, “People learned to use the chats that disappear after a while.”

That’s pretty horrifying, I’ve got to say. Not just some of it, but all of it. Maybe I’m sensitive because I am people’s boss and couldn’t fathom outsourcing my responsibilities to a technology that often gets things completely wrong.

Let’s move on to another conversation I had with a team lead at an agency. Something I have a lot of experience with!

“My company is pushing AI tools across the company in branding, copywriting, design, stock photo creation and of course development. They want to be the ‘first AI agency’ and are basically telling us to get on board or you’re not a fit here any longer.”

Pretty harrowing stuff. This isn’t much of a surprise to me so far, though, because unfortunately a reasonable portion of agencies will do everything they can to cut corners on a project to increase profits.

It’s been that way forever and it’s fundamentally why — understandably — organisations don’t trust agencies. A lot of my work is building that trust in the sales process to counter that.

I asked how their agency is currently billing clients and whether it’s retainers or fixed fees.

“We have production clients who are on an overall fixed fee and monthly retainer clients whose contracts are set by the number of hours they want to buy.”

Seems like a pretty standard agency setup to me. I wanted to dig deeper on the culture though, so I asked whether the “…telling us to get on board or you’re not a fit here any longer” messaging is causing fear amongst their colleagues.

They responded, “I would definitely say it’s causing some fear that they aren’t good enough / falling behind if they aren’t using it regularly.”

“The managers like to quote ‘AI won’t replace you, but a developer using AI would’ as a way to motivate certain team members to use the tools.”

Sounds more like a threat to me than motivation. I asked if their managers are effectively threatening to replace people who aren’t on board with AI, and if that worries them.

“In a way yeah but I don’t think they could directly get rid of them for that reason, more potentially make it uncomfortable so they’d leave.”

“It’s not a great feeling to have and to be honest, I think it’s a wider worry about the industry as a whole, as I feel a lot of agencies will be jumping on the AI train.”

“…I do worry about some of my team members and the direction of the company overall — I’m struggling to find the same motivation I had 12-18 months ago.”

A culture of encouraging staff to use AI isn’t isolated to this one agency, either. I spoke with a designer at another agency and they said, “Yes I work at a small digital agency and we’re being encouraged to use tools. Not particularly for image generation (outside of ideation or mood-boarding), but more for summaries, research and some copywriting.”

“I was very vocal about it at the beginning and think I managed to get a bit of a reputation for being difficult for my views, so I’ve yielded a little bit on certain tools, but still [I’m] very clear about what I think is appropriate for clients and almost certainly make sure we disclose when we use it.”

Even though AI tools seemingly aren’t directly used for the creative design output, they’re being used during the creative ideation process, which, for me as an agency founder, is terrifying, especially with some of the non-disclosure agreements (NDAs) I’ve signed over the years.

I can’t fathom using AI for copywriting either.

I also spoke with a software engineer at a huge, global retailer whose organisation is conducting a big push to leverage AI. Using it has now become a requirement there. I asked if the organisation has communicated that people’s jobs will be at risk if they’re not on board.

“Thankfully I have not seen any wind of it. There is a lot of discussion about how to embrace it [AI] and how we can help the next wave of engineers coming in from colleges to be ready for the transformations that will happen to the way we work.”

“I’m sure as the tech matures and we adopt specific tools organisation-wide, those discussions may happen. Right now we are still piloting different tools and figuring out what does and doesn’t work for the organisation.”

I asked if this was more of a pragmatic process, rather than a rushed, reactive process.

“I think it’s being done with a sense of urgency to embrace new technologies and how they can help us but not to the point where the average engineer would feel overly pressured or threatened by the push.”

“I’ve been enjoying my journey into how to leverage AI but I think for newer engineers or engineers looking to climb the ladder, like myself, it inherently adds pressure to be an earlier adopter and be one to spread knowledge early.”

I asked if there had been any disasters.

“Thankfully nothing in my area and I’m sure any team would try to keep those slip ups under wraps. I will say that in my personal experience little edge case bugs are more prevalent and no matter how careful I am and reread all code over and over I still manage to miss stuff when I don’t manually type it.”

“I’m pretty proud of raising solid PRs with low rates of bugs reported, but these last two sprints, I’ve had a few things slip past me due to the change of workflow. I’m sure this is something a mindful engineer won’t struggle with for long as we adjust to the new way of working.”

Let’s take a look at one more because I’m aware this article is getting very long. Sorry about that.

I had a developer reach out about AI very much being forced in their organisation.

“The CTO at my previous job tried Claude Code and really liked it so he said that all the devs had to use Claude Code in our work for generating code, generating tests, debugging, and validating design.”

“If we asked him a question on something he would tell us to ask Claude first. I never found Claude useful. It couldn’t debug anything and I didn’t like how carefully I’d have to comb through the code it generated to find the subtle bugs it would inject. The design validation was basically just telling us what we wanted to hear, which the CTO loved.”

This is the thing about AI tools. They are by design going to honour your prompt, which often results in your AI tool agreeing with you, even if you’re wrong.

I asked this person if their boss was effectively off-loading their responsibilities to AI too. I also asked for more information about design validation.

“He was trying to off-load responsibilities to Claude but Claude never gave us a good answer so it would just add an extra layer of back-and-forth to solving the problem.”

“Design validation is looking for possible performance, security, or concurrency issues in the design of our system. Claude would always have some generic answer that didn’t fit our specific circumstances so it was taken as validation that our design was good.”

Again, AI tools will validate you, even if you’re wrong.

“I think the fact that Claude didn’t have anything real to say about our designs was taken as validation that the design was good.”

“There was an assumption that if the design had an issue then Claude would catch that and say something relevant to it. I don’t think he ever considered the possibility that Claude wasn’t saying relevant things because it couldn’t do that. It would only be able to regurgitate generic advice you find on the internet about good software design.”

A lot of design critique is based on analytical, creative and soft skills, along with lots of experience. AI is completely incapable of all of that.

LLM is an acronym for large language model and that is exactly what these tools are — language processing systems. So am I surprised that Claude is just regurgitating generic design advice? No.

These tools are incapable of creating and analysing. They are only capable of pattern matching and regurgitating what has been fed into them during the training process.

Remember when Google’s AI summaries were encouraging people to add glue to their pizza sauce, for example? Yeh, that was a Reddit joke comment (which has since been removed), but it’s a good example of AI regurgitating what it has learned without the capability of determining that the comment was a joke.

Regurgitating, not creating. It’s what these tools do.

Some advice for navigating all of this

I’d say my overarching advice, based on how difficult tech recruitment is right now, is to sadly play along. But — and I cannot stress this enough — make sure you document everything.

What I mean by that is every single time AI tools cause problems, slow-downs and other disappointing outcomes, document that outcome and who was responsible for that decision. Make sure you document your opposition and professional advice too.

In fact, away from AI, I’d recommend documenting this stuff in general. It’ll be vital if you ever find yourself in a disciplinary and/or tribunal situation. You don’t know where the code your tool spat out came from either. Documenting who is responsible helps to protect you, individually, if litigation is raised against your organisation. If it is not your decision to use these tools, make that known, officially.

It’s quite clear we’re in a bubble with AI, or at best, a hype cycle. For example, a recent report from MIT found that 95% of generative AI pilots failed, and another study found that even when developers thought AI was making them faster, it was actually making them slower.

Billions are being poured into this technology, but big tech companies — the so-called “magnificent 7” (cringe) — can generally afford those losses. Ethics are clearly no longer even a factor for decision makers in these organisations.

My worry is that — as always — workers will be the ones to suffer as the bubble/hype cycle bursts. What I’m directly advising you to do is protect your interests right now.

Unionise.

Wrapping up

To everyone I spoke to who isn’t featured in this article: I’m sorry for excluding your input. This article is very long already. Just know, the conversation we had was incredibly useful in producing this article and I really enjoyed talking to you. Thank you.

This article has been really hard to write, I’m not going to lie. I really enjoyed my conversations but quietly, I was hoping I was wrong with my theory that AI tools were being forced, rather than being used on merit.

It’s so typical of the tech industry to jump on a shiny new thing, completely forgetting how much harm it causes, such as ChatGPT teaching someone how to more effectively commit suicide (extreme content warning on that link). Let’s not forget how much content was stolen in the first place — and continues to be “scraped” — to train models too. Legal cases have rightly been raised.

I would say I’m very much an AI sceptic. That’s not a position born from refusing to try AI tools, though. We’ve tried these tools for quite a while and, after that extended period, found them to be, more often than not, a complete hindrance, albeit quite useful in certain contexts. Your mileage may vary though! We know our position and that’s what’s important to us.

Regardless of your opinion of AI, forcing its usage is almost certainly going to end in disaster. My overall advice is to be prepared for that disaster and to protect yourself.
