Trust the single biggest priority when it comes to AI: Salesforce India CEO


Update: 2024-03-09 06:00 GMT

 Arundhati Bhattacharya, CEO and Chairperson of Salesforce India (IANS)

NEW DELHI: As India tightens its stance on the launch of new artificial intelligence (AI) models in the country, trust is the single biggest priority in building safe and responsible large language models (LLMs) that keep data secure, according to Arundhati Bhattacharya, CEO and Chairperson of Salesforce India.

In an interaction with IANS, Bhattacharya said the company is considering setting up a public sector initiative for AI and will take it forward only in a manner that fully complies with the government's requirements.

“The government has a lot of stringent processes and rules which we need to be able to completely comply with. We intend to be able to do that and only then, we will go in for it. It's one of the reasons why we didn't start Public Sector AI immediately,” she informed.

According to her, Salesforce has an advantage because “we do things in a manner that ensures that issues with AI are screened and weeded out before it reaches the final stage.”

The former SBI Chairperson said that the current government AI rules do not directly apply to them as they are not a B2C or end-user platform.

“We are a Cloud service provider. Anyone who is logging into our systems is doing so on behalf of a company and, therefore, they are dealing more with company matters than those that are pertaining to individual consumers,” she explained.

“We basically allow people to bring in their own AI models. We ask customers to tell us which model they would be comfortable with and we will integrate that model with you, for you.”

Most importantly, “we have our own robust trust layer. Anything that is coming out of our AI systems has to pass through the trust layer,” she told IANS.

Designed for enterprise AI, the ‘Einstein Trust Layer’ is a collection of features that help companies benefit from generative AI without compromising security or safety standards.
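As a rough illustration of the pattern Bhattacharya describes, where every model response passes through a screening step before it reaches the user, the Python sketch below gates all generations behind a single check. The function names and policy rules are assumptions made for this example, not the actual Einstein Trust Layer API.

```python
# Hypothetical sketch of a "trust layer" gate: every model response is
# screened before it is returned to the caller. Function names and the
# policy list are illustrative only, not Salesforce's implementation.

BLOCKED_TERMS = {"ssn", "password"}  # assumed policy list for the example

def call_model(prompt: str) -> str:
    """Stand-in for whichever LLM the customer chooses to plug in."""
    return f"model output for: {prompt}"

def passes_policy(response: str) -> bool:
    """Return True if the response clears the (toy) policy check."""
    return not any(term in response.lower() for term in BLOCKED_TERMS)

def generate(prompt: str) -> str:
    """Every generation is routed through the screening step."""
    response = call_model(prompt)
    if not passes_policy(response):
        return "[response withheld by trust layer]"
    return response

print(generate("Summarise the latest support case"))
```

The point of the pattern is that there is no path from the model to the user that bypasses the check, which is what the quoted claim about outputs having to "pass through the trust layer" amounts to.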

Among the new features added to the ‘Einstein Trust Layer’ is customer-configured data masking, which enables admins to select the fields they want to mask, giving them greater control.
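A minimal sketch of what admin-configured field masking could look like in practice follows. The field names, placeholder text and masking function are assumptions for illustration, not Salesforce's actual configuration model.

```python
# Hypothetical sketch of customer-configured data masking: an admin picks
# which record fields must be masked before data is placed in a prompt.
# Field names and the masking scheme are assumptions for illustration.

MASKED_FIELDS = {"email", "phone"}  # selected by the admin

def mask_record(record: dict) -> dict:
    """Replace admin-selected fields with placeholders before prompting."""
    return {
        key: "***MASKED***" if key in MASKED_FIELDS else value
        for key, value in record.items()
    }

contact = {"name": "A. Kumar", "email": "a.kumar@example.com", "phone": "9800000010"}
print(mask_record(contact))
# {'name': 'A. Kumar', 'email': '***MASKED***', 'phone': '***MASKED***'}
```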

Additionally, the audit trail and feedback data collected from AI prompts and responses are now stored in Data Cloud, where they can be easily reported on or used for automated alerts through Flow and other Einstein 1 Platform tools.
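In spirit, that audit-and-alert flow amounts to logging every prompt/response pair with metadata and flagging records that meet an alert condition. The record schema and alert rule in the sketch below are assumptions for illustration, not the Data Cloud or Flow APIs.

```python
# Hypothetical sketch of an AI audit trail: each prompt/response pair is
# stored with metadata so it can be reported on or trigger alerts later.
# The record schema and alert rule are assumptions, not Data Cloud or Flow.

from datetime import datetime, timezone

audit_log: list[dict] = []

def record_interaction(prompt: str, response: str, masked: bool) -> None:
    """Append one prompt/response pair with metadata to the audit trail."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "masked": masked,
    })

def interactions_needing_review() -> list[dict]:
    """Toy alert rule: surface interactions where masking was skipped."""
    return [entry for entry in audit_log if not entry["masked"]]

record_interaction("Draft a reply to case #84", "Here is a draft...", masked=True)
record_interaction("Show the customer's phone number", "9800000010", masked=False)
print(len(interactions_needing_review()))  # 1
```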

The company has just announced the availability of Einstein 1 Studio, a set of low-code tools that enables admins and developers to customise Einstein Copilot -- the conversational AI assistant for customer relationship management (CRM) -- and seamlessly embed AI across any app for every customer and employee experience.
