ISMS Copilot

We’re doing our best to ensure the assistants provide correct guidance. Our goal is that when you ask a question like “which ISO control deals with secure authentication?”, they answer with the latest control references.

This is not an easy task: it’s in an AI assistant’s nature to “hallucinate”, i.e. make mistakes. An assistant can also recall its training faithfully and still give a wrong answer, because the information it learned was incorrect in the first place.

Our role at ISMS Copilot is to make sure our assistants provide accurate and reliable information security management guidance.

Here are the controls we implemented to make it possible:

Training process

We train our assistants beyond default LLM models, grounding them in real-world experience of implementing security compliance frameworks.

Data validation

We employ rigorous data validation processes to ensure the accuracy of the information used in our ISMS Copilot assistants. Our AI trainers are subject-matter experts.

Regular updates

Our knowledge base is continuously updated to reflect the latest industry standards, regulations, and best practices in information security management.

User feedback loop

We actively encourage and incorporate user feedback to continuously improve the accuracy of our systems.

We are committed to maintaining the highest standards of accuracy in all our operations. If you have any questions or concerns about the accuracy of our services, please don't hesitate to contact us through the Trust Center.

Despite these measures, we’re aware this is a long-term effort and our assistants can still make mistakes. One example is mapping ISO 27001:2013 controls to ISO 27001:2022 controls, which is particularly challenging for the assistants. While they get it right most of the time, they can still hallucinate occasionally. We’re aware of this and are working on training programs that address these limitations.
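To illustrate why the 2013-to-2022 mapping trips up language models, here is a minimal sketch of a deterministic lookup table for a handful of controls. The sample entries below follow the published correspondence in ISO/IEC 27002:2022 Annex B, but they are only a small illustrative subset, not the full official mapping, and the function name is our own invention.

```python
# Illustrative sketch: a deterministic lookup from ISO/IEC 27001:2013 Annex A
# controls to their ISO/IEC 27001:2022 counterparts. Only a small sample is
# shown; several 2013 controls were merged into a single 2022 control.
MAPPING_2013_TO_2022 = {
    "A.9.4.2": ["8.5"],   # Secure log-on procedures -> Secure authentication
    "A.9.2.4": ["5.17"],  # Management of secret authentication information
    "A.9.3.1": ["5.17"],  # Use of secret authentication information
}

def map_control(control_2013: str) -> list[str]:
    """Return the 2022 control(s) for a 2013 Annex A control, or [] if unknown."""
    return MAPPING_2013_TO_2022.get(control_2013, [])
```

Because many 2013 controls were merged, split, or renumbered in 2022, a model recalling the mapping from memory can easily confuse neighboring identifiers, whereas a lookup table like this is exact by construction.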