SPOKE: GENERATIVE AI USABILITY


Audience: business users, project managers, and change and adoption roles.


Expanding usability by narrowing the focus

Generative AI services are becoming more subject-domain focused and outcome-specific by the month. Many new entrants are taking advantage of first-mover mistakes, which shows we are still at the beginning of this revolution in tools and working practices.


The new entrants are showing massive improvements in:

  • reducing errors (fewer hallucinations), and

  • increasing relevancy (accurate, deep topic content)


...by narrowing the focus of their service, i.e. being domain-specific.

[Image: engineering drawings with pencil and measuring device]

Offering high-value, business-oriented support, AI options like Claude (https://claude.ai/) and Perplexity (https://www.perplexity.ai/) introduce specialisation and support unlike that provided by general tools and services like Copilot.


Differentiation


The real impact lies in the difference between polishing tone and quality when re-drafting a response, and producing genuinely focused output. Copilot tools are becoming very adept at the former within a specific app context: Microsoft Teams + Copilot, for example, does an excellent job of reviewing and condensing meetings for consumption by people in hectic roles. Focused services like Perplexity.ai, by contrast, provide research- and learning-oriented output, with evaluation, assessment, resources and emphasis tailored to the user's query.


AI for content generation has evolved from a strong technology foundation; many services emphasise code generation and developer support, or analytics and decision-making, which do not always lend themselves to a business-based outcome. So we have moved our Generative AI adoption to straddle this change in context, and to make sure we don't become myopic about the best use of tools.


Impact is tangible

What does this mean for ongoing assessment and adoption of AI services?


In short:

  • Evaluating targeted Generative AI services becomes a necessity,

  • A quick evaluation model with standard criteria for scoring is a critical requirement,

  • Internal (specific) use-cases are a pre-requisite when putting the tools through their paces,

  • Generic productivity AI (like Copilot or Gemini) is useful but has limited reach,

  • Understanding the focus of the tool being used is important (e.g. Copilot in Outlook is not the same as Copilot in Excel), and

  • Specialised services will be a must for industry-, process- or role-specific needs.
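The "quick evaluation model with standard criteria for scoring" above can be sketched minimally in code. The criteria names, weights and scores below are illustrative assumptions for this sketch, not a prescribed rubric:

```python
# Minimal sketch of a quick evaluation model with weighted scoring criteria.
# Criteria names and weights are illustrative assumptions only.

CRITERIA = {
    "relevancy": 0.30,          # accurate, deep topic content
    "error_rate": 0.30,         # fewer hallucinations scores higher
    "fit_for_use_case": 0.25,   # performance on an internal use-case
    "ease_of_adoption": 0.15,   # effort to fold into daily work
}

def weighted_score(scores):
    """Combine per-criterion scores (1-5) into one weighted result."""
    return round(sum(CRITERIA[name] * scores[name] for name in CRITERIA), 2)

# Example: two hypothetical services scored against the same use-case
general_tool = weighted_score(
    {"relevancy": 3, "error_rate": 3, "fit_for_use_case": 2, "ease_of_adoption": 5}
)
specialist_tool = weighted_score(
    {"relevancy": 5, "error_rate": 4, "fit_for_use_case": 5, "ease_of_adoption": 3}
)
print(general_tool, specialist_tool)
```

Keeping the criteria standard across tools is what makes the comparison quick: every candidate service is scored on the same sheet, against the same internal use-case.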


The net result for organisations is that AI adoption will mean using a suite of services, not a single provider, depending on your needs.


What does this mean for Spoke's choice of tools?


We will be expanding our use of AI tools for productivity as we assess them against specific business processes and user scenarios. We will continue to address common questions and support needs against output from:


  • ChatGPT (GPT-4o)

  • Copilot

  • Gemini


Our final output process will now also have a specialist review the quality of topic output against equivalent output from:


  • Claude

  • Perplexity


We will continue to include identifiers and disclaimers on content produced using AI assistance, but ultimately it is our people who ask the question, refine the result and make the assessment.


Want to know what we know? Give us a call!

Want the best experience for your users? Learn how to take Generative AI into day-to-day office productivity, or just pick up some tricks of the trade. Email us at hi@timewespoke.com


About the author: Jonathan Stuckey
