M+E Daily

HITS 2024: Box Explores the Opportunity Presented by AI

“Keeping content secure is an evolving challenge and artificial intelligence (AI) presents an opportunity to fundamentally change how the media and entertainment (M&E) industry protects its most mission-critical content,” according to Manoj Asnani, VP of product management – security and compliance at Box.

New technological innovations and emerging threats are altering the security landscape on an almost daily basis.

But, during the security breakout session “Keep Your Content Secure in an AI-powered World,” May 22 at the Hollywood Innovation & Transformation Summit (HITS), Asnani explored how M&E companies can understand the new opportunities and challenges presented by AI, adjust their security strategies to capitalise on the new AI landscape, and leverage AI to empower their security postures across the M&E supply chain.

“We all know that AI has been dominating the conversations,” he said at the start of the session, noting how, at dinner the previous night, “everyone was only interested in talking about AI and sort of how it’s going to unlock productivity [and] how it’s going to make organisations much more effective.”

However, “what I really want to focus on is how will AI pair with security,” he said. “There’s a good and there’s a bad” side and he would be discussing “both sides,” he pointed out.

He told attendees: “Let’s start with this question: Do we think we’re ready? Show of hands? OK. Not a single hand went up.”

There is “a lot of good to be had with AI, especially with generative AI, that’s been sort of on a tear in the last 18 months or so,” he said. “It can help you summarise vast amounts of content and really get a much better understanding. It can actually help you identify threats much more efficiently, faster and more accurately as well. And it can actually help you derive deep insights from the content and you can use that insight to be a lot more efficient in terms of how you run your teams [and] how you run your organisations. So tons of good to be had with AI.”

But he was quick to add: “There are concerns [also]…. I’ve talked to a number of security professionals and a number of CIOs and even CEOs. The biggest concern that I keep hearing from them is that they’re freaked out about how their organisation’s data and content is going to be used by the AI models. What kind of permissioning AI models are [there going to be] that actually could use the data they have in their enterprise that they otherwise wouldn’t want it to be used? So there is that.”

He moved on to what he called the “second big thing I think that’s worth calling out” which is that “there could be potential vulnerabilities in the AI models” themselves. “So we’re not just talking about how you use AI to generate efficient attacks … but the vulnerabilities in the models” themselves. “So think about things like: Could the model be actually poisoned with stale or fake or malicious data so that then the output that you get from it is much more harmful to your organisation? Could there be bad actors really injecting malicious prompts into AI and tricking AI to essentially give you the answer that otherwise you don’t want them to get. So there’s this whole vulnerability aspect of it within the model.”

And, with that, comes the threat of AI attacks, he said. “The bad actors can actually use AI to generate novel attacks and they can actually generate much more efficient attacks. [The] latest example is writing phishing emails, which are much more realistic and people fall for it much more than they would before. So there are a lot of challenges with this as well.”

Citing IDC data, Asnani said that 84% of organisations are either already using AI or considering the use of AI, which he said is “massive.” And 50% of the organisations are “really concerned about how their proprietary information is going to be used by the AI models, which is not surprising. I would expect this number to be actually much higher.”

However, on the other hand, he said, citing the findings of a McKinsey survey: “You actually don’t want to be left behind, because when I talk to security teams … the organisations, especially security teams, who use AI to be efficient at securing the organisation, on average have almost $2 million in savings versus the security teams who actually don’t use AI.”

There is, therefore, a “fantastic opportunity here for us to make the security teams a lot more efficient at what they do,” he added.

HITS Spring was presented by Box, with sponsorship by Fortinet, SHIB, AMD, Brightspot, Grant Thornton, MicroStrategy, the Trusted Partner Network, the Content Delivery & Security Association (CDSA) and EIDR, and was produced by MESA in partnership with the Pepperdine Graziadio School of Business.