COMMENTARY: Artificial intelligence (AI) is everywhere. Companies of all sizes and in every market, including MSSPs, are deploying or experimenting with the technology to improve everything from call center operations to threat intelligence to marketing to quality control on the manufacturing floor. McKinsey estimates that generative AI could add up to $4.4 trillion to the global economy.

However, that opportunity comes with risk. Cybercriminals are already using AI to make their attacks more effective, and AI also exposes companies to other types of vulnerabilities, some of which we are just beginning to recognize and understand. There are several steps organizations can take to manage that risk.

Create a comprehensive view of the potential AI-related risks across use cases and map out options, both technical and non-technical, for managing those risks. Establish a cross-functional team to review and validate the risk assessments.

Implement a governance structure. This can include requiring references and fact-checking for AI responses, keeping humans in the loop, and protecting against problematic third-party data usage (a minimal sketch of such a reference-and-review gate appears at the end of this column). Embed the governance structure in an operating model and provide training for end users. An AI steering group should meet regularly to evaluate risks and mitigation strategies.

Automate data governance and information management, including archiving and deletion, to help keep employees from oversharing or exposing sensitive information. Role-based access and the elimination of manual intervention can reduce the risk of human error (a retention sweep sketch also appears at the end of this column).

Reassess your data backup and recovery capabilities. AI tools such as Microsoft Copilot will exponentially increase the volume of data generated across every company. Ensure you have sufficient storage in multiple locations and regular backups to help mitigate against system failures, cyberattacks and other disasters. This will be critical for managing AI-generated data and for ensuring you have enough data to train new AI applications.

Conduct customer training on cybersecurity awareness and AI risk management. Establish acceptable use policies for AI and regularly train staff on proper usage and the potential risks of AI-based workflows and solutions. There should also be rules around using public AI tools like ChatGPT.

AI has the potential to unlock new levels of innovation and productivity. But if companies do not fully understand the risks around AI and follow best practices for using the technology securely, they will not be able to realize its full potential, and they could leave themselves open to new and difficult-to-detect vulnerabilities.
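To make the governance recommendation a bit more concrete, here is a minimal sketch, in Python, of a reference-and-review gate for AI responses. Everything in it (the REQUIRED_SOURCES allowlist, the AIResponse and ReviewDecision types, and the route_for_review queue) is a hypothetical illustration of the idea, not a specific product's workflow.

```python
"""Sketch of a governance gate: require references, apply a basic source check,
and keep a human in the loop for anything that fails. Names and sources are
illustrative assumptions."""
from dataclasses import dataclass, field

# Hypothetical allowlist of sources the AI steering group has approved for citations.
REQUIRED_SOURCES = ("kb.example-mssp.com", "docs.vendor.example")


@dataclass
class AIResponse:
    question: str
    answer: str
    citations: list[str] = field(default_factory=list)


@dataclass
class ReviewDecision:
    auto_approved: bool
    reasons: list[str]


def review_gate(resp: AIResponse) -> ReviewDecision:
    """Auto-approve only responses that cite approved sources; flag everything else."""
    reasons = []
    if not resp.citations:
        reasons.append("no references supplied")
    elif not all(any(src in c for src in REQUIRED_SOURCES) for c in resp.citations):
        reasons.append("citation outside approved source list")
    return ReviewDecision(auto_approved=not reasons, reasons=reasons)


def route_for_review(resp: AIResponse) -> str:
    """Human-in-the-loop step: failed responses go to an analyst queue."""
    decision = review_gate(resp)
    if decision.auto_approved:
        return "released to user"
    return "queued for analyst review: " + ", ".join(decision.reasons)


if __name__ == "__main__":
    resp = AIResponse(
        question="Which ports does the EDR agent use?",
        answer="The agent communicates outbound over TCP 443.",
        citations=["https://docs.vendor.example/edr/network-requirements"],
    )
    print(route_for_review(resp))  # released to user
```

The point of the design is simply that an uncited or out-of-policy answer never reaches a user without an analyst looking at it first.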
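Similarly, the automated archiving and deletion recommendation can start as small as a scheduled retention sweep that runs without manual intervention. The record classes and retention periods below are illustrative assumptions, not prescribed values.

```python
"""Sketch of an automated retention sweep: decide, per record, whether to keep,
archive, or delete based on age and data classification. Values are illustrative."""
from datetime import datetime, timedelta, timezone

# Hypothetical retention policy, in days, keyed by data classification.
RETENTION_DAYS = {"public": 365, "internal": 180, "sensitive": 90}
ARCHIVE_AFTER_DAYS = 30  # move records to cold storage after this age


def retention_action(classification: str, created_at: datetime) -> str:
    """Return 'keep', 'archive', or 'delete' for a record based on its age and class."""
    age = datetime.now(timezone.utc) - created_at
    # Unknown classifications fall back to the strictest (shortest) retention period.
    limit = RETENTION_DAYS.get(classification, min(RETENTION_DAYS.values()))
    if age > timedelta(days=limit):
        return "delete"
    if age > timedelta(days=ARCHIVE_AFTER_DAYS):
        return "archive"
    return "keep"


if __name__ == "__main__":
    created = datetime.now(timezone.utc) - timedelta(days=120)
    print(retention_action("sensitive", created))  # delete
    print(retention_action("internal", created))   # archive
```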