
AGENTIC AI: SHAREPOINT KNOWLEDGE AGENT REVIEW

  • Oct 20
  • 7 min read

Updated: Nov 4


Audience: Solution designer, Information Manager, IT Operations

Reading: long-read, analysis and recommendations


Part of the approach to deploying Microsoft's new SharePoint Knowledge Agent (Preview) is understanding its scope and intent.


logo icons for copilot, 365, sharepoint and purview
The Knowledge Agent combined with ECM and Governance services

Unlike the generic Copilot snap-in for Office or the BizChat/Copilot Search approach to content generation, the Knowledge Agent is specifically targeted for a particular job. It's like an assistant with some basic information management knowledge but no practical experience.


Purportedly, the Agent should help organize, enrich, and maintain your organization’s digital files. This makes information easier to find, more reliable, and ready for productivity tools like Copilot. It simplifies site management by flagging outdated content, suggesting improvements, and allowing users to create automation or get answers using everyday language.


Deploying Knowledge Agent on SharePoint


What you need to know about enabling the Knowledge Agent in your tenancy:


  • It is in preview, so expect changes to features in the actual release.

  • Enablement requires PowerShell, and the default command needs controls added before you run it.

  • Deployment ignores designated opt-out sites from existing Copilot controls in SharePoint Admin.

  • Application of recommendations will change your library setup without roll-back.

  • Only a limited set of metadata and view changes is offered for libraries.

  • Default "lifecycle" automation for pages is potentially dangerous and may lead to loss of data.

  • Opening access to all users risks degrading information architecture, design, and search configuration.

  • It uses Syntex document processing and Autofill behind the scenes.


Checklist for Successful Deployment


Here are key things to ensure when running the Knowledge Agent:


  1. Identify specific roles or people for participation. This is not a general-use AI tool.

  2. Set internal expectations about what the agent will be used for and how.

  3. Know where your content is and which sites you absolutely should not include in the agent's testing.

  4. Be explicit about what you are testing—use non-production sites.

  5. Be explicit about what is not being tested. Don't run against content that is already curated.

  6. Ensure all testers are briefed, coached on writing prompts, and have in-depth knowledge of the content.

  7. All testers should know what to capture, how to evaluate, and how to score.
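Point 7 above implies a shared scoring sheet so testers capture comparable results. Below is a minimal sketch of such a sheet in Python; the criteria names and the 1-5 scale are illustrative assumptions drawn from the proof-points later in this article, not a Microsoft-provided rubric, so adapt them to your own evaluation plan.

```python
# A minimal sketch of a shared scoring sheet for agent-recommendation testing.
# The criteria names and 1-5 scale are illustrative assumptions, not a
# Microsoft-provided rubric.

CRITERIA = ("usefulness", "consistency", "accuracy")

def score_recommendation(rec_id, scores, notes=""):
    """Validate a tester's scores (1-5 per criterion) and return a record."""
    if set(scores) != set(CRITERIA):
        raise ValueError(f"score every criterion: {CRITERIA}")
    if not all(1 <= v <= 5 for v in scores.values()):
        raise ValueError("scores must be between 1 and 5")
    return {"id": rec_id, **scores, "notes": notes}

def summarize(records):
    """Average each criterion across all scored recommendations."""
    return {
        c: round(sum(r[c] for r in records) / len(records), 2)
        for c in CRITERIA
    }

records = [
    score_recommendation("metadata-columns",
                         {"usefulness": 4, "consistency": 3, "accuracy": 3}),
    score_recommendation("suggested-views",
                         {"usefulness": 2, "consistency": 4, "accuracy": 3}),
]
print(summarize(records))  # {'usefulness': 3.0, 'consistency': 3.5, 'accuracy': 3.0}
```

Even something this simple forces testers to score every criterion the same way, which makes the eventual go/no-go decision defensible.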


Proof-points


Because this is a preview, we are effectively doing product testing at the same time as proof-of-concept. The potential for a 'Proof of Value' to a business is significant. Basically, you are stress-testing the base functionality of the agent. What we want to understand is:


  1. How useful, consistent, and likely to be accurate are the recommendations?

  2. How useful are the proposed updates to the library content (corpus)? Are they things that users of that data/documents will use?

  3. Will the updates enhance or detract from access and usability of existing information?

  4. Will automation options have a positive or detrimental impact on information and user experience?


Hence, I recommend that Information Management or Business/Process Subject Matter Experts lead the testing.


Do not start testing unless you have key business, operational, or subject experts involved.

Ultimately, we are looking to make decisions on:


  • Usefulness of the recommendations and suggested changes.

  • Appropriate application of the changes in recommendations—scale, locale, structure.

  • Trustworthiness and honesty in testing feedback.


Do You Really Know Your Content?


Current marketing talks about "generation of metadata based on your content." Practically, this means that based on a small subset (up to 20 items) of documents in the library, the agent will extrapolate potential metadata (schema) and possible choices of labels (attributes).


  1. In a library of 1,000 items, 20 items is just 2%... and the proportion shrinks as the volume grows. Guidance on the right sample of documents is critical if you want useful recommendations.

  2. Using available properties and metadata, the Agent can recommend possible views for the library, although not necessarily with appropriate naming or targeting for end-users.

  3. On request, the Agent can suggest automation (read: notifications, labelling) and workflow (read: delete), requiring process-specific context and lifecycle management.


The other key area for focus (context) is the management of page content and its lifecycle (visibility). This is a potentially dangerous area in which to allow unfettered AI to run automation, particularly when it may impact regulatory or compliance obligations.


Running the Wizard for Pages


What Does It Actually Do?


When you run the agent as a user (Site Owner), you get two sets of context-based experiences via a floating 'bot' UI in the bottom-right of the screen.


On Site Pages


From the floating button, you can trigger basic AI interactions and new functionality to "Improve this site."


Knowledge agent context menu for publishing pages on the site.
Page management context menu

In practice, that means 'retiring' pages (hiding them from search), deleting them, or labelling them.


In a Library


Most of the options on the Agent context menus are really just re-skinned versions of existing functionality. However, there are new and somewhat useful functions wrapped in with the rest.


Knowledge agent library menu with commands for structuring library.
Library mgmt context menu

New Features


The only new features in the agent were found under the "Improve this site" and "Organize my library" options. Both have some smarts wrapped around entity extraction and analysis, which is then dropped into the "Create column" action.


For example, "Organize my library" works by generating a pre-scripted prompt when you press the button. You can update or change the prompt before pressing go, but the built-in agent constraints will limit your options. No matter what value you change "no. of columns" to, you don't get more than four, and the first couple will always include "Description" and "Document type."


The newer functionality uses high-level prompts with deliberately limited context and response options. It relies on the Syntex document ingestion and parsing functions to extract entities on which to base potential recommendations.


monkeys putting documents onto an industrial conveyor belt
Bulk-processing documents costs peanuts here

Specifically:


The true potential of the Agent is based on the Copilot integration and use of these features. Its recommendations for extraction of metadata and auto-fill of that metadata form the basis for other actions it can perform.


It's worth noting that the cost of SharePoint Agents and Knowledge Agent processing is covered by the Microsoft 365 Copilot license of the users running it, but not for unlicensed users or guests.


Caution: The default Syntex model is a Pay-as-you-go license. While the Agent is in Preview, there is no cost associated with the use of the service. However, Microsoft has not stated if this will be the case post-Production release.

Where Is the Value?


What we want is for our internal, curated content to be accessible and available in useful form. That is what Microsoft's ongoing investment in the Knowledge Agent appears to be delivering.


The ability to:


  1. Assess, recommend, and apply appropriate metadata for views, filtering, and automation, which will significantly improve search as well as Generative AI tools.

  2. Introduce consistent, broadly applied, and available metadata, which will enable real-world automation for content reviews, approvals, release, and expiry.

  3. Use the metadata and lifecycle steps to drive a self-cleaning, manageable environment.
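The review-and-expiry automation described in point 2 can be sketched simply: filter on lifecycle metadata, then hand the result to a person. The field names ("status", "review_date") are illustrative assumptions for this sketch, not actual SharePoint column names.

```python
# A sketch of metadata-driven review automation: flag published items whose
# review date has passed and queue them for a human decision. Field names
# ("status", "review_date") are illustrative, not SharePoint column names.
from datetime import date

def overdue_for_review(items, today):
    """Return published items whose review date has passed."""
    return [
        i for i in items
        if i["status"] == "published" and i["review_date"] < today
    ]

library = [
    {"name": "policy.docx", "status": "published", "review_date": date(2024, 6, 1)},
    {"name": "handbook.docx", "status": "published", "review_date": date(2026, 1, 1)},
    {"name": "draft.docx", "status": "draft", "review_date": date(2024, 1, 1)},
]

for item in overdue_for_review(library, today=date(2025, 1, 1)):
    # The decision (retire, archive, destroy) stays with a human reviewer.
    print(f"Review overdue: {item['name']}")
```

Note that the code only surfaces candidates; it never deletes anything, which is precisely the division of labour argued for below.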


And this is where you need people, specifically for the decisions: what to keep, what not to keep, archive vs. destroy, and where content does not meet the rules you've set out.


You could always achieve this kind of enrichment and automation, but it meant spending heavily on add-on services that required significant upfront work, such as Syntex document processing. Ironically, these are all things Microsoft has long promised but never delivered in SharePoint, and now they are pre-canning Syntex to deliver them.


Investments in the Agent


From early testing, there's a lot that needs more upfront thought than is obvious right now. I would love to see investment around:


  • Making the agent role-based for use, management, and operation, i.e., accessible to delegated roles with the necessary knowledge to guide application.

  • Chaining functionality together to enable scaling and consistency of output.

  • Ability to override prompts with interactive guidance - it's only partially there in Preview.

  • Tying to existing rules or tools in place, e.g., Purview Retention labelling.

  • Optional enforced centralized notification and approval - especially with Autofill, it will cost.

  • Release management and roll-back of deployed columns - previewing options is limited today.

  • More accessible monitoring events for operational support.


Basically, we need manageable options for Agentic development and the ability to run multiple agents together. All of this needs to keep the key roles (read: Information and Knowledge Managers) in control, understanding how the agents work and how to guide them.


Best Results from Testing


For the best results, I recommend defining S.M.A.R.T., usable evaluation criteria to assess whether the Microsoft Knowledge Agent is right for your situation. Keep in mind that this is not ready for organization-wide assessment or release yet.


  1. Establish a test tenancy with a useful amount of real-world sample data (cloned sites, perhaps). Use that as a test bed to see what the tool suggests.


  2. Monitor whether the features work, but use subject-matter and information/lifecycle expertise to review the output before it is applied.


  3. Limit your Microsoft 365 Copilot and Agent access to reduce potential burden from general end-user access to services that can incur pay-as-you-go billing.


While it is packaged as a product, it is effectively pre-release code.


Verdict


It's worth evaluating, but with caution and a structured plan. This tool is to be used by experienced and knowledgeable humans - not all users. As a release, it is both useful and, as usual, just not quite there yet. Be considered in your approach to evaluating this tool (and it is a tool, more than an assistant or true AI Agent).


The suggestions from the raw (untrained) Agent features were not great, but it is quick and sometimes uncovers things you haven't considered. It does not remove the requirement for a knowledgeable and experienced human in the loop reviewing the output and how it will be used.


CAUTION: As with Microsoft 365 Copilot, you will uncover existing information access and privacy issues. However, you will also incur potential clutter and clean-up requirements post-testing.

That said, the evolution of these things is swift, so keep your eyes open over the coming weeks.


Resources


There are some great articles and honest evaluations out there, not just marketing.


Disclaimer


Microsoft 365 Copilot was used for image generation and to reduce effort in QA review. Topic ideation and practical evaluation of the content presented are entirely the author's responsibility. Any errors in content, presentation, etc. are all the author's fault.


Want to Know What We Know? Give Us a Call!


Looking for guidance in adopting Generative AI in a robust and useful manner? Or interested in learning how to adopt Generative AI into day-to-day office productivity, or even just learn some of the tricks of the trade? Email us at hi@timewespoke.com


About the author: Jonathan Stuckey


