ITSM concerns when integrating new AI services

Ariel Gesto August 7, 2024

This article was originally published in Help Net Security.

Let’s talk about a couple of recent horror stories.

Late last year, a Chevrolet dealership deployed a chatbot powered by a large language model (LLM) on its website. The LLM, trained on detailed specifications of Chevrolet vehicles, was intended to answer only questions about Chevrolet cars.

However, users quickly found ways to circumvent these limitations: through a series of leading questions that drifted progressively outside the chatbot’s intended scope, they prompted it to recommend Tesla vehicles instead. Soon the bot was being manipulated into writing code, and even offering to sell cars for one dollar.

A more alarming incident involved GitHub Copilot, Microsoft’s AI assistant for writing code. Due to the phenomenon of AI hallucinations – where the AI generates plausible yet false information – the tool suggested a non-existent library. Seizing the opportunity, a developer created a library by that name, embedded malware within it, and uploaded it to GitHub. Within four days, the malicious library had 100,000 downloads.

These failures should give pause to any business leader looking to integrate AI into their operations. They’re a reminder of the complexities and risks associated with embedding this shiny new technology into vital processes.

4 key considerations for AI integration in ITSM

Before jumping into adopting AI for IT Service Management (ITSM), keep in mind the following considerations.

1. Managing proprietary information

AI models require vast datasets to function optimally, but unrestricted data usage can lead to significant security breaches. Companies must ensure that only relevant, non-sensitive data is fed into AI systems to prevent unauthorized access and misuse.
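
As a rough illustration, a pre-submission filter along the following lines can help enforce that rule before anything leaves the organization. This is a minimal sketch: the field names and redaction patterns are hypothetical and would need to match your own data model.

```python
import re

# Fields we never want to leave the organization (hypothetical data model).
SENSITIVE_FIELDS = {"customer_email", "payment_details", "internal_notes"}

# Simple patterns for data that should be redacted from free-text fields.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),       # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "[CARD_NUMBER]"),  # card-like numbers
]

def sanitize_ticket(ticket: dict) -> dict:
    """Return a copy of an ITSM ticket that is safe to send to an external AI service."""
    safe = {k: v for k, v in ticket.items() if k not in SENSITIVE_FIELDS}
    for key, value in safe.items():
        if isinstance(value, str):
            for pattern, replacement in REDACTION_PATTERNS:
                value = pattern.sub(replacement, value)
            safe[key] = value
    return safe

if __name__ == "__main__":
    ticket = {
        "id": "INC-1042",
        "summary": "VPN fails for jane.doe@example.com, card 4111 1111 1111 1111 on file",
        "customer_email": "jane.doe@example.com",
        "internal_notes": "Escalated by management",
    }
    print(sanitize_ticket(ticket))
```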

2. Ensuring data integrity – “Garbage in, garbage out”

The quality of the data used to train AI models directly impacts their performance. Organizations need to implement rigorous data sanitization processes to ensure that the data used is accurate, reliable, and free from errors. This step is critical in producing trustworthy AI outputs.
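
What sanitization looks like depends entirely on your data, but a minimal sketch might resemble the following, assuming historical tickets are the training source; the field names are illustrative.

```python
from datetime import datetime

REQUIRED_FIELDS = ("id", "description", "resolution", "closed_at")

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in a single historical ticket record."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            problems.append(f"missing field: {field}")
    closed_at = record.get("closed_at")
    if closed_at:
        try:
            datetime.fromisoformat(closed_at)
        except ValueError:
            problems.append("closed_at is not a valid ISO-8601 timestamp")
    return problems

def clean_dataset(records: list[dict]) -> list[dict]:
    """Keep only complete, well-formed, de-duplicated records for training."""
    seen_ids = set()
    cleaned = []
    for record in records:
        if record.get("id") in seen_ids:
            continue  # drop duplicates
        if validate_record(record):
            continue  # drop incomplete or malformed records
        seen_ids.add(record["id"])
        cleaned.append(record)
    return cleaned
```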

3. Establishing stringent access controls

AI models do not inherently understand access control, making it essential to define clear boundaries on who can access specific data.

A common approach is to deploy multiple LLMs, each designed for a specific purpose. The key to establishing stringent access controls lies in feeding each LLM only the information its users should consume.

This approach replaces the idea of a single generalist LLM fed with all of the company’s information, ensuring that access to data is properly restricted and aligned with user roles and responsibilities, as sketched below.
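
A minimal sketch of that routing might look like this; the roles, model names, and knowledge-base identifiers are placeholders, not references to any real product.

```python
from dataclasses import dataclass

@dataclass
class ScopedAssistant:
    """One purpose-specific model plus the only knowledge base it may see."""
    model_name: str       # hypothetical model identifier
    knowledge_base: str   # hypothetical document collection

# Each role gets its own assistant; no single model sees everything.
ASSISTANTS_BY_ROLE = {
    "service_desk": ScopedAssistant("itsm-support-llm", "kb_public_runbooks"),
    "hr":           ScopedAssistant("hr-policy-llm", "kb_hr_policies"),
    "finance":      ScopedAssistant("finance-llm", "kb_finance_reports"),
}

def assistant_for(user_role: str) -> ScopedAssistant:
    """Resolve which scoped assistant a user may talk to; unknown roles are rejected."""
    try:
        return ASSISTANTS_BY_ROLE[user_role]
    except KeyError:
        raise PermissionError(f"No assistant is provisioned for role '{user_role}'")
```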

4. Rigorously verifying AI outputs

AI systems can produce outputs that appear plausible but are fundamentally incorrect. These AI hallucinations underscore the need for rigorous verification processes.

Regular audits and checks should be in place to ensure that AI-generated outputs are accurate and reliable. This practice is crucial in maintaining trust and credibility in AI applications.
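
A narrow but concrete example, prompted by the Copilot incident above, is checking that a dependency suggested by an assistant actually exists before anyone installs it. The sketch below queries PyPI’s public JSON endpoint; existence alone is not proof of safety, which is exactly why the output still flags packages for human review.

```python
import requests

def pypi_package_exists(name: str) -> bool:
    """Check whether a package name suggested by an AI assistant exists on PyPI."""
    response = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return response.status_code == 200

def review_suggested_dependencies(suggested: list[str]) -> None:
    """Flag likely-hallucinated dependencies before anyone runs `pip install`."""
    for name in suggested:
        if pypi_package_exists(name):
            print(f"{name}: found on PyPI - still review it for typosquatting or malware")
        else:
            print(f"{name}: NOT on PyPI - likely hallucinated, do not install")

if __name__ == "__main__":
    review_suggested_dependencies(["requests", "definitely-not-a-real-package-xyz"])
```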

5 best practices for AI integration

1. Flexibly leverage multiple AI providers

Adopting an agnostic AI strategy is essential for enhancing ITSM processes. Instead of committing to a single AI provider, organizations should utilize a flexible interface that allows seamless integration with various AI models.

For example, leveraging Microsoft’s Azure for its robust GPT models to handle customer service inquiries and Google’s Vertex AI for streamlining internal IT workflows can be highly effective.

This approach ensures that businesses are not tied to a single provider, allowing them to switch to the best available models as AI technology evolves. By maintaining this flexibility, organizations can continually access state-of-the-art capabilities without the risk of vendor lock-in.

This strategy not only optimizes performance but also ensures that the AI infrastructure remains adaptable and future-proof, ready to incorporate new advancements as they emerge.
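
In practice this usually means coding against a thin internal interface and treating each provider as a pluggable adapter. The sketch below is illustrative only; the class names are placeholders and the provider calls are left as stubs.

```python
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    """Thin, provider-agnostic interface the ITSM platform codes against."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class AzureOpenAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder: call Azure OpenAI here via its official SDK.
        raise NotImplementedError

class VertexAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder: call Google Vertex AI here via its official SDK.
        raise NotImplementedError

# Swapping providers becomes a configuration change, not a rewrite.
PROVIDERS: dict[str, ChatProvider] = {
    "customer_service": AzureOpenAIProvider(),
    "internal_workflows": VertexAIProvider(),
}

def answer(task: str, prompt: str) -> str:
    """Route a request to whichever provider is configured for this task."""
    return PROVIDERS[task].complete(prompt)
```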

2. Leverage pre-trained models

Training AI models from scratch requires significant time, money, and computing power. Given these substantial demands, leveraging pre-trained models has become a necessity for most organizations.

By using models that have already been trained on vast datasets, companies can significantly reduce costs and implementation time. Moreover, pre-trained models often come with advanced capabilities honed by leading AI researchers, offering cutting-edge performance right “out of the box”.

Fine-tuning these pre-trained models with proprietary data is still imperative, as it allows organizations to tailor the AI’s functionality to their specific needs. This approach ensures that while the underlying model benefits from extensive training, it is also relevant and customized to the particular challenges and requirements of the business. It’s like having a high-end sports car that you can tweak to perform perfectly on your unique track.
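
As one hedged example, a managed fine-tuning job on a hosted provider can be started with only a few calls; the file name, example format, and model identifier below are illustrative and should be checked against your provider’s current documentation.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set and that
# "sanitized_tickets.jsonl" contains chat-formatted examples built from
# historical, already-sanitized tickets, e.g.:
# {"messages": [{"role": "user", "content": "Printer offline after update"},
#               {"role": "assistant", "content": "Reinstall the vendor driver ..."}]}

client = OpenAI()

# Upload the proprietary training data.
training_file = client.files.create(
    file=open("sanitized_tickets.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on top of a pre-trained base model.
# The model name is illustrative; check the provider's list of fine-tunable models.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

print(f"Fine-tuning job started: {job.id}")
```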

3. Establish clear access boundaries

To prevent unauthorized access to sensitive information, define who can access what data and under what circumstances. For instance, a marketing representative should have access to promotional materials and product information, but not to sensitive financial data or internal HR records.

By feeding AI models only the data necessary for the user’s specific role, organizations can mitigate risks and ensure that sensitive information remains secure. This practice not only protects the data but also aligns with compliance requirements and best practices for data governance. It’s about creating a fortress around your valuable information, ensuring that only those with the right keys can enter.
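
A minimal sketch of that kind of gatekeeping, with hypothetical roles and document classifications, might look like this:

```python
# Hypothetical document classifications and role permissions.
ROLE_PERMISSIONS = {
    "marketing": {"public", "product"},
    "finance":   {"public", "product", "financial"},
    "hr":        {"public", "hr"},
}

def allowed_documents(user_role: str, documents: list[dict]) -> list[dict]:
    """Return only the documents a role is cleared to see.

    Anything not explicitly permitted is excluded, so a model answering this
    user can never be grounded on data outside their clearance.
    """
    permitted = ROLE_PERMISSIONS.get(user_role, {"public"})
    return [doc for doc in documents if doc.get("classification") in permitted]

if __name__ == "__main__":
    docs = [
        {"title": "Spring promo plan", "classification": "product"},
        {"title": "Q3 P&L", "classification": "financial"},
        {"title": "Salary bands", "classification": "hr"},
    ]
    # A marketing user only ever sees the promo plan.
    print([d["title"] for d in allowed_documents("marketing", docs)])
```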

4. Implement strong NDAs and data protection agreements

When utilizing external AI services, safeguarding proprietary information is crucial. Implementing robust Non-Disclosure Agreements (NDAs) and data protection agreements ensures that your data remains confidential and is not misused. These agreements should clearly stipulate that the data provided will not be used for retraining or any purposes other than those explicitly agreed upon.

This legal framework provides a safety net, protecting your organization from potential misuse of data by third-party providers. It’s essential to be diligent in this aspect, as the repercussions of data leakage can be severe, ranging from competitive disadvantages to legal liabilities.

5. Combine AIaaS and self-hosted models for enhanced security

To maintain strict control over sensitive data while leveraging the benefits of AI, organizations should adopt a hybrid approach that combines AI-as-a-Service (AIaaS) with self-hosted models.

For tasks involving confidential information, such as financial analysis and risk assessment, deploying self-hosted AI models ensures data security and control. Meanwhile, utilizing AIaaS providers like AWS for less sensitive tasks, such as predictive maintenance and routine IT support, allows organizations to benefit from the scalability and advanced features offered by cloud-based AI services.

This hybrid strategy ensures that sensitive data remains secure within the organization’s infrastructure while taking advantage of the innovation and efficiency provided by AIaaS for other operations.

By balancing these approaches, businesses can optimize their AI integration, maintaining high security standards without sacrificing the flexibility and performance enhancements that cloud-based AI solutions offer.
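
One way to express that routing in code is sketched below; the sensitivity test, the internal endpoint, and the cloud stub are all assumptions standing in for a real data-classification policy and real provider SDK calls.

```python
import requests

# Hypothetical internal endpoint for a self-hosted model.
SELF_HOSTED_URL = "http://llm.internal.example:8080/v1/generate"

def is_sensitive(task: str) -> bool:
    """Very rough sensitivity test; real deployments should rely on data
    classification labels, not keyword matching."""
    return any(keyword in task for keyword in ("financial", "risk", "payroll", "contract"))

def run_self_hosted(prompt: str) -> str:
    """Send confidential work to a model hosted inside the organization's infrastructure."""
    response = requests.post(SELF_HOSTED_URL, json={"prompt": prompt}, timeout=60)
    response.raise_for_status()
    return response.json()["text"]  # assumed response shape

def run_cloud(prompt: str) -> str:
    """Placeholder for an AIaaS call (e.g., a cloud provider's managed model API)."""
    raise NotImplementedError("Call your AIaaS provider's SDK here")

def handle(task: str, prompt: str) -> str:
    """Route each task to the self-hosted or cloud model based on its sensitivity."""
    return run_self_hosted(prompt) if is_sensitive(task) else run_cloud(prompt)
```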

In conclusion

Like any new technology, AI presents both opportunities and challenges when integrated into ITSM processes. By leveraging pre-trained models, establishing clear access boundaries, implementing strong NDAs, and regularly monitoring and auditing AI systems, organizations can navigate these complexities. These best practices not only enhance the functionality and efficiency of AI integrations but also keep the organization’s data secure and compliant with regulatory standards.

In our rush to introduce these new technologies, we’ll sometimes trip and fall. By learning from those real-world examples, your organization can avoid similar pitfalls, ensuring that AI becomes a powerful ally in your IT operations, and that you don’t end up selling Chevys for a dollar.
