Dear Crucial Skills
I’m starting to see AI-generated writing everywhere, including in emails and messages from colleagues, which feels disingenuous. I understand AI is here to stay, but I’m concerned by how much my coworkers are using it and how it might affect our interactions. I know I can’t control others, and I don’t want to, but I also don’t want my interactions to be mediated by AI without knowing that they are. How can I maintain the integrity of my interactions and my work without dictating what others do?
Signed,
In the Dark
Dear In the Dark,
I appreciate your concern, and I understand why you would feel dismayed to discover that emails you’ve been reading from a colleague were written by a bot. What was once the stuff of sci-fi flicks is now our everyday reality!
I also understand why someone might want to use AI for writing emails, whether they feel unskilled at writing or just want to save time.
The good news is that neither you nor your coworker is alone in your preferences, which means the technology will continue to provoke conflicting viewpoints. Those differences will lead to conversations and, eventually, to shared norms around AI use.
So, while it’s true you can’t control what others do, you can influence the development of those norms. Here are a few ideas.
Make Requests with Respect
Share your preferences with peers and ask them to honor them. If there’s a workplace policy in place that supports your preference, point to it. Otherwise, your best bet is to make a polite request.
That might sound like, “May I ask a favor of you? I know we use AI to assist with a lot of work, but I prefer it isn’t used for writing messages or emails—I don’t feel like I’m getting the real you. Can you honor that with me?”
Or, “I understand AI can speed up our work, but I want to be aware when it’s used. Will you share when you’ve used AI on our projects?”
Advocate for a Standard
If your organization doesn’t already have an AI policy in place, bring it up. Without a standard, there can be no clear expectations. Raise the concern with those who can get the conversation onto the boardroom table, and share your ideas for what the policy should include.
That might sound like, “Hey, I think we need a policy that establishes some guidelines and best practices for how to use AI in our work and with each other. Can we discuss that and escalate to executive leadership?”
Communicate the Standard
If guidelines are in place, they need to be communicated. Ask leadership to share the AI guidelines and discuss them openly and consistently.
If this isn’t happening, make it a point yourself to share and discuss them with the people on your team and those you interact with.
Have conversations to clarify expectations. “Hey, we have these new guidelines in place around AI use, and I think it’s important we stick to them. Have you seen them?”
Put Up Signs
Use signposting to remind people of the practices regarding AI use. This might occur via email, in messaging apps, as popups, through regular review meetings, etc.
Promote Transparency and Human Involvement
I assume it’s not yet common practice, but some organizations have an AI policy that requires any work generated with AI to be disclosed to internal stakeholders.
To me this seems like a good practice for two reasons: transparency fosters trust and dispels suspicion, and disclosure ensures proper attention can be given to the quality and accuracy of the output.
That could be as simple as, “FYI, I coded this with help from ChatGPT.” You might add a step in your workflows or processes to review and discuss any work completed or affected by AI.
Practice and Highlight What You Prefer
Adhere to the practices you want others to follow. If you don’t yet have a policy, practice the behaviors you hope will make it into the policy. When you demonstrate those behaviors, others can see and experience the consequences that follow. And the more you normalize those behaviors, the easier it becomes for others to adopt them.
Share stories and highlight wins that demonstrate those behaviors. Avoid overemphasizing the technology’s results in these examples; focus instead on how people have used it. That is, prioritize behaviors over outcomes.
Confront with Curiosity
If you think AI is being used in a way that undermines competence or trust, address it—but avoid leading with accusations. First, confirm the facts of the situation, then share your perspective, then ask for others’.
For example, “Was AI used to create this bid? I’m concerned some of our clients won’t appreciate it, and I know other team members share that concern. I think we need to revise our practice, but I’d like to understand how you see it.” Be patient with yourself and others as you adapt to new technology and workplace disruptions.
While AI presents new challenges to our social lives, those challenges are also familiar. Every new technological development has had its proponents and opponents.
One hundred and fifty years ago, the invention of the telephone was met with similar tensions.
Decades before that, Samuel Morse, the inventor of the telegraph, warned, “This mode of instantaneous communication must inevitably become an instrument of immense power, to be wielded for good or for evil, as it shall be properly or improperly directed.”
And millennia before him, Socrates opposed the practice of writing, arguing it would diminish dialogue, authenticity, and critical thinking.
I don’t mean to downplay the potential impact of AI. I, too, have concerns. My point is that this is who we are, and this is what we do.
The good news is that people can’t sustainably endure a lack of trust. If AI-generated work fosters distrust, that distrust will come to light.
I hope you find a way to preserve trust in your AI-mediated interactions—for the health of your relationships and workplace. What you do will influence norms, and may even influence future laws. There’s work for all of us to do, no matter our preferences. Keep talking.
Comments

This was valuable insight into the issue of AI.
I’ve always taken time to craft emails and documents to ensure that all needed information is in them. I worry that people reading AI “summaries” won’t actually read the content of the documents, which provides deeper insight.
I worry that we are dealing with more information but in a much less thoughtful and careful way than we did previously.
Very useful solutions, Ryan. Even if only a couple of them are implemented, it will be a very positive step forward in starting to set norms for accountability and transparency.
Thanks!
Hey Ryan, I appreciate your thoughts and insights on one of the new challenges of this generation. I love the emphasis you put on curiosity, as well as the foundation of trust that leads to authenticity. FYI: Grammarly assisted with some of my poor grammar. I always enjoy your perspective. I hope you have a great day.
Thanks, Stacy.
A helpful and thoughtful collection of ideas. Thanks, Ryan.
I am currently working on a project where our consultant partner uses AI to generate minutes from conference calls. When the minutes are sent, we are asked to read the summary carefully and provide corrections or feedback because AI was used.
Someone once told me a story about a teacher they had in the 1970s who warned that calculators would stop people from learning math. Fast forward to today—calculators are practically obsolete, and math education has evolved.
I also remember when “computer operator” was a job title. Now, we’re all computer operators. The lesson? Technology doesn’t replace us—it reshapes us.
If you don’t embrace AI, you risk being left behind. If writing isn’t your strong suit, why wouldn’t you use a resource that helps you communicate more effectively? We use tools every day at work to make us better and more efficient—AI is no different.
P.S. I used Grammarly to check this post. So… is Grammarly bad too? 😉
I don’t see a connection between using AI to help write a message and being disingenuous. Those are two different issues. There’s nothing disingenuous about seeking help, whether from AI, an editor, or any other resource, to communicate more clearly or effectively than one might on their own.