AI in Your LMS: Built-In or Bolted On?

Every LMS vendor has "AI" now. It's on every homepage, every product brochure, every slide deck. I recently saw a LinkedIn post from a vendor claiming their platform generates training content "in seconds through AI", and I won't lie, that made me laugh. A full, quality course in seconds is just not going to happen.
Not because AI content generation isn't real or useful (it absolutely is), but if you've actually used it — genuinely used it, not just watched a demo — you know that "seconds" is doing a lot of heavy lifting in that sentence. Minutes, yes. Hours saved compared to doing it manually, absolutely. But seconds? That's just marketing doing what marketing does.
And that's the problem with "AI-powered" as a label right now: it's become table stakes. Every SaaS company, whether it's an LMS, an HRIS, a CRM or a project management tool, has to have something AI in the portfolio or it's immediately perceived as legacy, behind the curve, old school. So everyone slaps an AI badge on the marketing page and calls it done.
The question worth asking isn't "do you have AI?" It's "what is the AI actually doing, and where does it live?"
The Bolt-On Problem
If a vendor's AI only lives in one place — a course builder, a quiz generator, a search bar — it's bolted on: added to address a gap, to answer an RFP question or just to tick a box. It doesn't know anything about your learners, not their skills, their job roles, what they've already completed or where the gaps are. It just does its one thing in isolation.
That's not necessarily useless: a decent AI course builder that saves your L&D team four hours on a module is genuinely valuable. But it's a point solution, a feature wearing an AI badge rather than intelligence woven through the platform.
Genuine AI integration means the AI has access to everything: learner profiles, job roles, skills data, completion history, course content, department structures. It can draw on all of that to answer questions, surface insights and make recommendations that are actually relevant to the person asking. Not generic, not guesswork: contextually aware.
The difference matters more than vendors want you to realise.
What We Built, And Why We Were Honest With Ourselves First
"AI-powered" is a phrase I've thought hard about using for Learnient, because I think it's been so thoroughly abused that it almost means nothing anymore so let me tell you exactly what we've built and you can decide for yourself whether it deserves the label.
When we started building Learnient, AI wasn't something we decided to add later; it was part of the product strategy before we wrote the first line of code. The question we kept asking was: where can AI actually make an L&D manager's life easier? Where does it genuinely remove friction, surface information faster, save real time? Not where can we put an AI badge, but where does it actually help?
That thinking shaped everything, including how the data is structured, how the system stores information and how it makes that information available. We call our AI engine Lenni, and Lenni isn't just a chatbot sitting in a corner of the platform. Lenni has access to everything — every learner profile, every job role, every course, every skill, every completion record — and can draw on all of it to answer questions and surface what matters.
Whether that makes us "AI-powered" in the truest sense? I'll let you decide.
Natural Language Reporting: The One That Surprises People Most
Here is an example of what this looks like in practice.
You're an L&D manager and you need to know who in your Manchester office hasn't completed GDPR training. In most platforms you'd build a report: select your data source, add your filters, pick your fields, choose your format, run it. That isn't terrible, but it takes time and requires you to know your way around the reporting interface.
With Lenni you just ask. Type "show me everyone in the Manchester office who hasn't completed GDPR training" and you get your answer.
That might sound like a gimmick, and lots of vendors say "natural language reporting" when what they really mean is a simplified query builder with a text box on top. What makes Lenni different is the intent engine underneath it.
The biggest challenge with natural language isn't understanding a perfectly phrased query; it's understanding that "who hasn't done their GDPR?", "show me GDPR non-completions" and "GDPR — who's behind?" are all asking for the same thing. People don't talk like developers. They use shorthand, make spelling mistakes and ask follow-up questions that only make sense in context, so Lenni is trained to understand what someone means, not just what they typed.
Lenni also keeps context across a conversation. If you ask "show me all users broken down by department" and get your chart, you can then just say "sort descending" and Lenni knows you mean the same chart sorted differently; you don't have to repeat yourself. Ask "who is Bob's manager?", then "who else is in the same job?", and Lenni understands both questions are still about Bob, even though you haven't mentioned his name again. That's the difference between a query tool and something that actually understands how people work.
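If you like to see the mechanics, here's a rough sketch of that context-keeping idea. To be clear, this isn't Lenni's actual code and the names are made up for illustration; it just shows how a partial follow-up like "sort descending" can update the previous query instead of starting from scratch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueryState:
    metric: str = "user_count"
    group_by: Optional[str] = None
    sort_order: str = "ascending"

def apply_message(state: QueryState, message: str) -> QueryState:
    # Deliberately naive keyword matching; a real intent engine classifies meaning,
    # copes with shorthand and typos, and so on. The point here is that a partial
    # follow-up updates the existing state rather than starting a brand new query.
    text = message.lower()
    if "by department" in text:
        state.group_by = "department"
    if "descending" in text:
        state.sort_order = "descending"
    return state

state = QueryState()
state = apply_message(state, "show me all users broken down by department")
state = apply_message(state, "sort descending")   # no need to repeat the original query
print(state)   # QueryState(metric='user_count', group_by='department', sort_order='descending')
```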
TIP: When evaluating an LMS's AI reporting, don't just ask "can it do natural language queries?" Ask the vendor to demonstrate a follow-up question: change one element of a report without restating the whole query. If it can't hold context, it's more limited than it looks in the demo.
Also worth knowing: Lenni has a confidence check built in. If the intent behind a question is ambiguous enough that it might return the wrong data, Lenni asks a clarifying question rather than guessing. Nobody wants a reporting engine that confidently returns the wrong answer; that's the AI hallucination problem in a context where it genuinely matters. You present numbers to your board, nobody checks, and it turns out the AI made them up. I've seen that happen (not in an LMS, but the principle is identical), and the guardrails matter.
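The shape of that guardrail is simple. As an illustration rather than our actual implementation (the threshold value and the classifier here are stand-ins): if the intent score falls below a threshold, ask a clarifying question instead of running the report.

```python
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.8   # assumed value, purely illustrative

def classify_intent(question: str) -> Tuple[str, float]:
    """Stand-in classifier. A real engine would use a trained model here."""
    q = question.lower()
    if "gdpr" in q and ("hasn't" in q or "non-completion" in q or "behind" in q):
        return "gdpr_non_completions", 0.95
    if "gdpr" in q:
        return "gdpr_non_completions", 0.55   # mentions GDPR, but the actual ask is unclear
    return "unknown", 0.1

def handle(question: str) -> str:
    intent, confidence = classify_intent(question)
    if confidence < CONFIDENCE_THRESHOLD:
        # Ask rather than guess: a clarifying question beats a confident wrong answer.
        return "Do you want people who haven't completed GDPR training, or everyone assigned it?"
    return f"Running report: {intent}"

print(handle("GDPR — who's behind?"))   # confident enough to run the report
print(handle("what about GDPR?"))       # ambiguous, so the right behaviour is to ask
```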
Creating Content From Your Own Documents
Every LMS with a course builder is adding AI generation now: you prompt it and it creates a course, and that's fine and useful. But the most interesting capability — certainly for UK companies in regulated industries — is generating content from your own documents.
Here's how it works. You're creating a course on health and safety. You could type a prompt and Lenni will create something reasonably generic. Or you click the paperclip, upload your actual health and safety policy — the document your business uses, with your specific processes, your reporting procedures, your storage requirements — and Lenni builds the course from that.
The difference matters enormously. Generic AI content on health and safety will cover the basics correctly, but it won't know that near-miss incidents at your company get logged in the book in the second drawer at reception. It won't know your specific supplier procedures, your particular building layout or your internal escalation process. Your policy document knows all of that, and training built from your policy is training built for your business rather than for a hypothetical business in your industry.
We support PDF, Word, text files, PowerPoint and Google Docs equivalents, so in practice you can upload almost anything your HR or compliance team has already written.
One thing I want to be clear about: Lenni automatically puts AI-generated content into draft mode, and it doesn't go live until a human reviews it. That's intentional. AI can make mistakes. It can misread a document, summarise something in a way that loses an important nuance, or create an assessment question that's technically accurate but practically misleading. In low-stakes training that's annoying; in food safety, health and safety or any regulated industry where getting it wrong could harm someone, it's unacceptable. Your health and safety manager needs to check that content before it goes anywhere near your employees, and that's just the right way to do it.
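Conceptually the gate is as simple as this sketch (illustrative only, not Learnient's real code or data model): AI-generated content starts life as a draft and can't be published without a named human reviewer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CourseContent:
    title: str
    body: str
    ai_generated: bool
    status: str = "draft"              # AI output never starts as "published"
    reviewed_by: Optional[str] = None

def publish(content: CourseContent) -> CourseContent:
    if content.ai_generated and content.reviewed_by is None:
        raise ValueError("AI-generated content must be reviewed by a person before publishing")
    content.status = "published"
    return content

course = CourseContent("Health & Safety", "...generated from your policy document...", ai_generated=True)
# publish(course)                      # would raise: no reviewer has signed it off yet
course.reviewed_by = "H&S manager"
publish(course)                        # now allowed
```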
TIP: When a vendor implies you can go from upload to published without a review step, ask them what happens when the AI gets something wrong. The answer will tell you a lot about how seriously they take the responsibility of training content.
Skills-Based Recommendations: What We Do Now and Where We're Going
Lenni's recommendations work in layers. When a learner asks what training they should do, the first thing Lenni looks at is mandatory training — anything the business has assigned to that person that hasn't been completed — because if the business has said you need to do this, everything else is secondary.
Once mandatory training is handled, Lenni looks at skills. It knows the skills assigned to the user, the skills required for their job role and the skills developed by each course in the library, and from that it builds a gap analysis: here are the skills your job requires that you don't yet have, and here is the training most likely to address those gaps. It won't recommend something you've already completed, and it won't recommend training that doesn't address your actual gaps.
There's also a third mode worth mentioning: when a learner comes in and says "I need help with communication skills" or "can you recommend something on project management?" Lenni doesn't just run a skills analysis. It understands the learner has asked a specific question and goes looking for content that addresses exactly that, even if it's not technically required for their job role, because people want to develop themselves beyond their job description and that matters.
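To make the layering concrete, here's a rough sketch. The data model and names are invented for illustration and aren't Learnient's actual schema, but the ordering is the point: outstanding mandatory training first, then courses that close genuine skills gaps, and never anything already completed.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Course:
    title: str
    skills: Set[str]                                  # skills the course develops
    mandatory_for: Set[str] = field(default_factory=set)   # job roles it's assigned to

@dataclass
class Learner:
    name: str
    job_role: str
    skills: Set[str]
    completed: Set[str]

def recommend(learner: Learner, required_skills: Set[str], library: List[Course]) -> List[str]:
    # Layer 1: outstanding mandatory training always comes first.
    mandatory = [c.title for c in library
                 if learner.job_role in c.mandatory_for and c.title not in learner.completed]
    # Layer 2: gap analysis — skills the role requires that the learner doesn't have yet.
    gaps = required_skills - learner.skills
    gap_fillers = [c.title for c in library
                   if c.skills & gaps and c.title not in learner.completed and c.title not in mandatory]
    return mandatory + gap_fillers

library = [
    Course("GDPR Essentials", {"data protection"}, mandatory_for={"Account Manager"}),
    Course("Difficult Conversations", {"communication"}),
    Course("Project Management Basics", {"planning"}),
]
sam = Learner("Sam", "Account Manager", skills={"planning"}, completed=set())
print(recommend(sam, required_skills={"data protection", "communication", "planning"}, library=library))
# ['GDPR Essentials', 'Difficult Conversations'] — mandatory first, then gap-filling, no repeats
```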
So, where we genuinely are with this: the skills recommendations work, but today they're reactive. A manager has to go in and ask for the analysis, or a learner has to log in and ask the question, and we haven't yet flipped it round to make it proactive.
What does proactive look like? The admin dashboard surfacing a notification that says you have 25 employees with significant skills gaps in these three areas, and here's the training that would address them. Or Lenni identifying that a cluster of employees all have certifications expiring in 90 days and, rather than just flagging the expiry, suggesting you create or assign a refresher course now, before it becomes a compliance problem. We already show dashboard tiles for expired training and disengaged learners, so the next step is surfacing skills risk at an organisational level rather than just the individual level.
We don't have a date to put on that, and I'd rather be honest about it than promise something we're not ready to ship. But it's a direction we'd genuinely love input on, because the people who know best what an L&D manager needs first thing on a Monday morning are L&D managers.
Nudges and Notifications: Useful, But Not Yet Smart
Our notification system today does what you'd expect: it surfaces expiring training, flags non-completions, alerts on disengaged learners, and the timing is configurable. For most of our customers that's genuinely useful as it stands.
Is it AI-driven in any deep sense? Not yet. The rules are set by the admin and the system follows them. The direction we're heading is the difference between "here is a list of people whose training expires next month" and "here are those people, grouped by common training gaps, with a suggested course to assign them". That's the difference between a notification and a recommendation, and we're working towards the latter.
The Honest Answer to "Are You Really AI-Powered?"
Nobody in this space is building their own foundation models from scratch; that's an undertaking at a scale even the biggest LMS vendors aren't attempting. What matters isn't which model sits underneath, it's whether the AI is genuinely woven into the platform or just bolted on to answer an RFP question.
For Learnient, we built with AI in mind from the beginning. Lenni has access to the full breadth of data in the system, and the intent engine is trained, refined and gets better with use — every time a user gives Lenni a thumbs down on an answer, we capture the full context of that conversation, not just the last message but the whole thread leading up to it, so we can understand what went wrong and improve accordingly. That's not a feature we added later.
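The shape of that feedback loop, sketched very roughly (the structure is an illustration, not our actual schema): a thumbs-down stores the whole thread, because the failure often happened two or three messages earlier.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

@dataclass
class Message:
    role: str        # "user" or "lenni"
    text: str

@dataclass
class FeedbackRecord:
    rating: str                  # e.g. "thumbs_down"
    thread: List[Message]        # the full conversation, for debugging where intent went wrong
    captured_at: str

feedback_log: List[FeedbackRecord] = []

def record_thumbs_down(thread: List[Message]) -> None:
    feedback_log.append(FeedbackRecord(
        rating="thumbs_down",
        thread=list(thread),     # copy the whole thread, not just the last exchange
        captured_at=datetime.now(timezone.utc).isoformat(),
    ))

thread = [
    Message("user", "show me all users broken down by department"),
    Message("lenni", "Here's the chart."),
    Message("user", "sort descending"),
    Message("lenni", "Here's a chart of course completions."),   # wrong: context was lost
]
record_thumbs_down(thread)
```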
But here's something I think most vendors won't say: AI isn't a replacement for your people. Lenni can generate a course on health and safety in minutes, but your health and safety manager still needs to read it. Lenni can surface skills gaps across your organisation, but your L&D team still needs to decide what to do about them. Lenni can answer questions about your data, but you should still sense-check anything that's going to your board. AI does the heavy lifting, your people apply the judgement, and honestly that's just how it should work.
You're in Control, Not Lenni
Not every company is comfortable with AI accessing their data. Some have strict internal policies, some are in industries where data governance is a regulatory requirement, and some people simply aren't there yet with AI. That's completely fine.
So we built Learnient so that you control what Lenni can and can't do, all of it. If you want Lenni to handle everything — analytics, recommendations, content generation, document uploads, the full picture — brilliant, it's there. But if your company policy says AI can't touch your employee data, you can turn that off entirely and Lenni won't access it. If you're comfortable using AI to generate content from a prompt but you're not ready to upload internal policy documents, you can enable one and disable the other. If you just want Lenni to tell you where in the platform to go to do something, a help assistant and nothing more, that's a valid choice too.
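As a rough illustration of what "granular" means here (the capability names below are illustrative, not the exact settings in the product), think of it as independent switches rather than one master toggle:

```python
from dataclasses import dataclass

@dataclass
class LenniPermissions:
    analytics: bool = True            # natural language reporting over learner data
    recommendations: bool = True      # skills-based training suggestions
    content_generation: bool = True   # generate courses from prompts
    document_uploads: bool = True     # generate courses from your own policy documents
    help_assistant: bool = True       # "where do I go to do X?" guidance

# Comfortable with prompt-based generation, but not ready to upload internal policies
# or let AI near employee data:
cautious = LenniPermissions(analytics=False, recommendations=False, document_uploads=False)

def can_answer_data_question(perms: LenniPermissions) -> bool:
    return perms.analytics            # if analytics is off, Lenni simply won't touch that data

print(can_answer_data_question(cautious))   # False
```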
It's your choice, and we're not going to force AI on people who aren't ready for it or who have good reasons not to use it. What we will do is make sure that when you are ready — when your policies allow it, when your comfort level gets there — it's already there waiting for you, properly built in.
A lot of the AI conversation in this industry assumes everyone is enthusiastic and ready to go. In reality, plenty of L&D managers and HR teams are navigating internal IT policies, GDPR considerations and understandable scepticism from senior stakeholders, and we're not here to bulldoze past that.
What To Ask In a Demo
If you're evaluating LMS platforms and AI is on your checklist, here are the questions worth asking:
Ask them to show you a natural language query, then ask a follow-up question without restating the original. Does the system hold context, or do you have to start again?
Ask what happens when the AI isn't confident it understands what you're asking. Does it guess or does it ask?
Ask where the AI gets its information from. Does it draw on learner profiles, job roles, skills data, completion history or does it only know about course content?
Ask to see AI-generated content placed into a draft workflow, not published directly. If it goes straight to live, ask why.
Ask what formats they can generate training content from. If the answer is only text prompts and not your own documents, that's a meaningful gap for any organisation with existing policies and procedures.
Ask whether you can control AI access granularly. Not just "can you turn AI off?", but can you turn off specific capabilities independently? Analytics but not content generation? Content generation but not document uploads? If the answer is all or nothing, that's worth knowing before you sign.
And if a vendor tells you their AI generates quality training content in seconds? Ask for a live demo, not a pre-recorded one.
A Final Note
We named our AI engine Lenni, not as a marketing gimmick but because we wanted Lenni to feel like part of the platform — something with personality, something that's actually there to help rather than just a toolbar button dressed up as intelligence. You can ask Lenni for a training report or ask Lenni what their favourite food is and they'll answer both.
What we've built so far is something we're proud of and what we're building next is shaped by what our customers tell us they need. If you've got opinions on where AI should go in a platform like this, we genuinely want to hear them.
Ready to transform your employee training?
Book a demo and see how Learnient can help you build a training program your employees will actually love.
Book a demo