AI Trust, Risk and Security Management (AI TRiSM) becomes possible when organizations apply cross-disciplinary practices and methodologies to AI models and their supporting data, and adopt tool sets that support those practices.
We just published an update to our Market Guide for AI Trust, Risk and Security Management, which lays out a framework for managing model trust, risk and security, and lists sample vendors in the niche software categories that support that framework.
AI TRiSM methods and tools work with any model, ranging from third-party LLMs such as ChatGPT to homegrown enterprise models that use a variety of AI techniques. Of course, with third-party models there are some differences – for example, with regard to protecting enterprise training data on the shared infrastructure used to adapt the model for enterprise use cases.
Additionally, enterprises have no direct ability to govern third-party models using the ModelOps tools we write about in the Market Guide.
But explainability, model monitoring and AI application security tools can all be used on any model, whether third-party or homegrown, to achieve the trustworthiness and reliability enterprise users need. In fact, they should be used on third-party models and products.
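The point about model monitoring holds even when you only see a model's outputs, which is the usual situation with third-party models. As a minimal sketch (not a vendor tool from the Market Guide, and with illustrative thresholds), you can track the distribution of a model's scores and flag drift against a baseline using the Population Stability Index (PSI):

```python
import math

def psi(baseline, recent, bins=10):
    """Population Stability Index between two score samples in [0, 1].

    Higher values mean the recent score distribution has shifted further
    from the baseline. Works purely on model outputs, so it applies to
    third-party and homegrown models alike.
    """
    edges = [i / bins for i in range(bins + 1)]

    def frac(sample, lo, hi):
        # Fraction of the sample falling in [lo, hi); the top bin includes 1.0.
        n = sum(1 for x in sample if lo <= x < hi or (hi == 1.0 and x == 1.0))
        return max(n / len(sample), 1e-6)  # floor to avoid log(0)

    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        b, r = frac(baseline, lo, hi), frac(recent, lo, hi)
        total += (r - b) * math.log(r / b)
    return total

# Illustrative data: baseline scores spread across the range, recent scores
# bunched near the top (e.g. a classifier growing overconfident after an
# upstream model update).
baseline = [0.1, 0.2, 0.25, 0.4, 0.5, 0.6, 0.7, 0.8, 0.85, 0.9]
recent   = [0.7, 0.75, 0.8, 0.82, 0.85, 0.88, 0.9, 0.92, 0.95, 0.99]

score = psi(baseline, recent)
# A common rule of thumb: PSI above 0.25 signals a shift worth reviewing.
print(f"PSI = {score:.2f}, drift flagged = {score > 0.25}")
```

A monitoring job like this runs on a schedule against recent production scores; what action a flag triggers (retraining, human review, rollback) is the governance question the Market Guide's framework addresses.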