Can AI write your papers?

There is much talk about how artificial intelligence (AI) can write for us.

Nikki Gemmell wrote in The Australian newspaper that ‘We scribblers and hacks are staring at the abyss in terms of the chatbot future roaring at us’.

Professional copywriter Leanne Shelton lamented its impact on her business: she expects it to take a 35 percent hit this year following OpenAI’s release of ChatGPT last November.

I am seeing clients experiment with a range of AI tools to help with their work too.

Yet, like Nikki Gemmell, I am not concerned about AI taking my job.

AI can help the writing process and will stretch us to think harder and better, but it is not (yet, at least) a match for human insight.

Let me explain why.

  1. AI can’t make a judgement call
  2. AI relies on humans asking really good questions
  3. AI can’t explain how it arrived at its answer
  4. AI’s writing ability is surprisingly poor
  5. AI is inherently biased

Let me unpack each of these further.

 

AI can’t make a judgement call

Even when organisations (eventually) set up their own private AI instances, feed in proprietary data and set appropriate access permissions, AI can only offer limited help.

Let’s imagine that we feed the past decade’s board and senior leadership team papers into a proprietary database. We then add an AI engine on top. Leaders and board members could enter queries such as: ‘What is our company’s data security strategy?’ The AI engine would then ‘read’ all of its material and summarise what the papers say about our company’s data security strategy. That is useful as far as it goes.

But what if we asked it: ‘How could we improve our data security strategy?’ Again, it would summarise what the papers in its database say about the potential risks inherent in our current strategy. Again, useful as far as it goes.

Assuming the information in the papers is both accurate and complete, the summary may be helpful. What I don’t know is whether it would place the strategy at a point in time or give all the information equal weighting. For example, a five-year-old data security strategy would be out of date. Would it flag the information from that strategy as being five years old, or merge it with all the other data security material and give everything equal weight? I am not sure, but for this kind of summary to be useful we would need to know.
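To make this concrete, below is a minimal, hypothetical sketch of what the query layer over such a paper store might look like. Everything in it is invented for illustration: the BoardPaper structure, the keyword scoring and the summarise_with_llm stub stand in for whatever document store, vector search and privately hosted model an organisation would actually use. It shows the basic pattern (retrieve the most relevant papers, then ask a model to summarise them) and labels each source with its year, which is one simple way the age question above could be handled.

import re
from dataclasses import dataclass

@dataclass
class BoardPaper:
    title: str
    year: int
    text: str

def retrieve(papers: list[BoardPaper], query: str, top_n: int = 5) -> list[BoardPaper]:
    # Naive keyword scoring for illustration only. Real systems typically use
    # embeddings and vector search, but the principle is the same: the engine
    # can only 'read' what is already in the store.
    terms = set(re.findall(r"[a-z']+", query.lower()))
    def score(paper: BoardPaper) -> int:
        return sum(paper.text.lower().count(term) for term in terms)
    return sorted(papers, key=score, reverse=True)[:top_n]

def summarise_with_llm(question: str, context: str) -> str:
    # Hypothetical stand-in for a call to whatever model the organisation hosts.
    # Returning the labelled context keeps the sketch runnable without a model.
    return f"Question: {question}\n\nSource material considered:\n{context}"

def answer(papers: list[BoardPaper], query: str) -> str:
    relevant = retrieve(papers, query)
    # Each source is labelled with its year so the age of the material is at
    # least visible; whether a given tool does this is exactly the open question.
    context = "\n\n".join(f"[{p.year}] {p.title}: {p.text}" for p in relevant)
    return summarise_with_llm(query, context)

papers = [
    BoardPaper("Data security strategy", 2018, "Our strategy relies on perimeter firewalls and annual reviews."),
    BoardPaper("Cyber risk update", 2023, "Ransomware and supply-chain attacks are now the top data security risks."),
]
print(answer(papers, "What is our company's data security strategy?"))

Even with a sketch like this, the tool only reports what the papers already say; it cannot judge whether the 2018 strategy is still the right one.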

The limitations become even more obvious when we ask the question that we really need an answer to. What would it say if we asked it: ‘What is the right data security strategy for our company in today’s context?’

This is where the human comes in. Deciding what the ‘right strategy’ is for a specific company relies on judgement. So far at least, AI doesn’t have the ability to make a judgement call.

 

AI relies on humans asking really good questions

AI can only answer the questions we ask using the data it has access to. If we ask the wrong question, we will get the wrong answer.

In my experience, asking the right question is a major part of the challenge. 

So even accounting for all of our limitations, humans are at an advantage here. We can interpret the questions we are asked, which can be very useful.

If I ask my team to answer a specific question, and they realise I am off base, they can answer the question I asked but also provide me with what I really need.

They can do this because they understand the context in which I operate, which an AI tool does not.

 

AI can’t explain how it arrived at its answer

While it is fun to ask these bots all sorts of questions to see how they answer, they can’t explain their reasoning. This matters if, for example, we need to audit something.

Imagine you reported to a regulator that customer complaints for a product like a credit card fell by 20 percent during 2023. The regulator would ask you to provide evidence so it can have confidence that this is true.

In the current world you can unpack the data feed. You can explain where and when the data was collected, and how it fed into the dashboard that generated the result.

AI doesn’t allow you to do this; it simply asserts what it found using its own hidden processes.

 

AI’s writing ability is surprisingly poor

I put this to the test recently in a conversation with a client. Brooke had been playing with ChatGPT to see if it could help her write a risk memo on non-lending risk acceptance in digital processes.

The result was both unhelpful and hard to read. It identified that operational, cyber and compliance risks needed to be considered. While the information was true, Brooke already knew this, and the output lacked context.

As a test, we put the response through my favourite writing tool, the Hemingway Editor. This involved copy-pasting the text from ChatGPT into Hemingway, which then evaluated the writing quality.

It assessed the quality as poor and gave a reading age of Grade 14, which means it was written at university level. It classified 13 of the 20 sentences as very hard to read.

You might not think this is a problem given that many people reading risk reports are university graduates. It is, however, well above the Grade 8 I recommend for my clients to ensure fast and easy reading for busy executives. In contrast, this article scores at Grade 7.

We then asked ChatGPT to improve the language of its original draft and re-tested with Hemingway. The new draft came in at a reading age of Grade 9, a significant improvement if we ignore that the content was still unhelpful.

I have repeated the test and had similar results.
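Hemingway does not publish its exact formula, but readability grades like these are driven by sentence length and word length. The short sketch below uses the Automated Readability Index as a stand-in (not Hemingway’s own method, and the sample sentence is my own invention, not Brooke’s memo) to show how a single dense sentence lands well above the Grade 8 target.

import re

def automated_readability_index(text: str) -> float:
    # Estimates a US reading grade from characters per word and words per
    # sentence. Longer words and longer sentences push the grade up.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    characters = sum(len(word) for word in words)
    return 4.71 * (characters / len(words)) + 0.5 * (len(words) / len(sentences)) - 21.43

# An invented example of the kind of dense sentence risk memos are full of.
dense = ("The committee determined that operational, cyber and compliance risks "
         "associated with non-lending risk acceptance in digital processes "
         "required comprehensive consideration.")
print(round(automated_readability_index(dense), 1))  # prints 22.7, well above Grade 8

Splitting that sentence into two or three shorter ones with plainer words drops the score sharply, which is exactly what Hemingway pushes you to do.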

 

AI is inherently biased

This is where the discussion gets really interesting. I have asked ChatGPT and Google’s equivalent, Bard, to provide me with information about topics that interest me.

I find it useful when asking for facts. For example, which podcasts discuss board paper writing, or which art schools offer weekend life drawing classes in my city. The tool provides a tidy summary that is easier than hunting through links provided by Google or Bing.

I worry about its responses that include opinion, however. I had some fun and asked some personal questions to see what it would do.

For example: ‘How does the moon affect women’s health?’ ChatGPT claims the moon doesn’t affect women’s health. In contrast, Bard described this as a contested area and offered a list of areas that are currently being researched. In this instance, Bard’s answer was more accurate and more helpful.

When asking about more sensitive topics, however, the answers were both contradictory and troubling. Both Bard and ChatGPT have strong views about topics such as climate change and the move to electric vehicles, among other things.

Both began by explaining that they were AI tools that could not offer opinions, before doing just that.

Given AI is a tool coded by humans, those humans influence how it works and the results it gives. We need to be very aware of this and evaluate any results we receive accordingly.

My conclusion is that although AI is a fun tool to play with and can be useful for finding information, it needs to be used with care. It won’t replace human judgement any time soon. It will, however, push us to get better. We need to critically evaluate anything it ‘spits out’ and lift our own game so we are adding real value, not just regurgitating facts.

 I hope that helps.

Cheers, 

Davina

E8 – Adam Bennett – Communicating during great change

Cutting Through: Helping experts engage ‘outsiders’ in complex ideas

Major Change Communication Episode

 

Leading a major change effort while navigating the almost polar-opposite expectations of a board and the employees is no small task.

This week's Cutting Through guest Adam Bennett shares his experiences leading such a challenge as the first post-privatisation CEO of the NSW Land Registry Services.

Adam transformed a previously largely paper-based government department into a lean, modern, digitised and customer-oriented team. We talked about how to

  • Engage the team as the CEO on day 1 when about half of them had been picketing outside parliament against the organisation's privatisation
  • Think about both the mindset and mechanics of the first 90 days and beyond
  • Bring humility and honesty to appeal to the team's adulthood (in other words, how to avoid sugar-coating what's coming while not creating dissent).

I thoroughly enjoyed this conversation as Adam shared deep leadership and communication wisdom.

Timestamps 

00:38 – Get to know Adam 

06:18 – Engaging team as CEO on day 1 

14:22 – Approaching the first 90 days as CEO  

23:16 – Importance of both employee and customer satisfaction 

32:36 – Handling the heat of tough discussions or unpopular decisions 

36:40 – Maintaining a positive workplace environment amidst great change 

47:26 – Adam’s final two tips: Mindset and Mechanics 

 

 Resources

  1. Download Adam's whitepaper on Great Change below.
  2. Connect with Adam on LinkedIn

 



Using peer pressure to skittle dissent rather than doing it yourself

How often have you presented a new capability or idea knowing that some stakeholders are not in your corner?

It is rare to have all your stakeholders championing your success, so this is a common challenge to address.

During a coaching session this week, a client shared his clever hack which I thought would be useful to you also.

When showcasing a new product or strategy, Fred leverages his winners to persuade his losers so he doesn’t have to. Let me explain the situation and then the solution.

The situation …

Imagine you are ready to showcase a new platform that your team has prioritised developing over the past six months. This platform underpins features for a host of other use cases.

In prioritising this platform, other projects have been necessarily delayed. This was the right decision given the risk of rework on other projects if they were built without leveraging this new foundational platform.

So, in the room you have winners and losers: those who are excited about the prospect of the new features they can now access, and those whose projects have been delayed.

The solution …

Fred said that he deliberately invites both winners and losers to the showcase so long as the losers are not overwhelming in number or volume. This has a number of benefits. It

  • helps the losers gain a better sense of perspective. The winners help them see that their priorities have not been ignored but have ‘taken one for the greater good’.
  • means the losers are persuaded by their peers, rather than by him. Their peers are likely to have more credibility as Fred is the one who made the decision they didn’t like.
  • reduces the need for him to go one by one to showcase his product or strategy to either group.

I thought that was a clever hack and that it might help you also.

More next week.

 

Cheers,

Davina

 

Write emails that are easy to action

Emails are a constant challenge.

They are ‘everywhere’ in our day-to-day work and yet often seem too small a communication to invest heavily in.

To help with this challenge I have prepared a short video tutorial offering four techniques for writing emails that are easy to action:

  1. Have an obvious purpose
  2. Grab attention with the subject line
  3. Highlight one visible message
  4. Visualise the message hierarchy

It's short, in keeping with the medium, but offers specific examples to bring these ideas to life.

You can access this free tutorial here >>

I hope that helps. More next week.

Davina


PRESENTED BY DAVINA STANLEY

I love what I do.

I help senior leaders and their teams prepare high-quality papers and presentations in a fraction of the time.

This involves 'nailing' the message that will quickly engage decision makers in the required outcome.

I leverage 25+ years' experience including

  • learning structured thinking techniques at McKinsey in Hong Kong in the mid 1990s before coaching and training their teams globally as a freelancer for a further 15 years
  • being approved to teach the Pyramid Principle by Barbara Minto in 2009
  • helping CEOs, C-suite leaders and their reports deeply understand their stakeholder needs and communicate accordingly
  • seeing leaders cut the number of times they review major papers by ~30% and teams cut the amount of time they take to prepare major papers by ~20%*
  • watching senior meetings focus on substantive discussions and better decisions rather than trying to clarify the issue

My approach helps anyone who needs to engage senior leaders and Boards.

Recent clients include 7Eleven, KPMG, Mercer, Meta, Woolworths.

Learn more at www.clarityfirstprogram.com

 

(*) Numbers are based on 2023 client benchmarking results.

How to hit the ground running in a big new role

Have you ever wondered how senior people hit the ground running in a new role?

I recently spoke with Cerise Uden about her strategies for doing that on the Friday before she started a new senior program manager role.

At the simplest level, we talked about preparation.

It got really interesting when we got into the detail, though.

Cerise shared her simple yet specific approach for quickly engaging and delivering for senior decision makers. We discussed how to

  1. Work out who to really get in front of early on (and when to do it)
  2. Fill any knowledge gaps you might have, particularly if the role covers new areas such as AI
  3. Nail down precisely what you need to deliver and to whom

You’ll find the episode on your favourite player and on our website here.

I hope that helps. More next week.

Kind regards,
Davina
