OpenAI Looks to India for AI Impact Insights

Srinivas Narayanan said that India had made AI use more impactful while reducing user harm through initiatives like the Digital Public Infrastructure (DPI)…reports Asian Lite News

India has a unique approach to making AI more beneficial for people and OpenAI is committed to learning from it, Srinivas Narayanan, Vice President of Engineering at OpenAI, said here on Wednesday.

Through initiatives like the DPI, he said, India has made AI use more impactful while reducing harm to users, and OpenAI is committed to learning from the country to make the technology more beneficial.

Speaking at the ‘Global IndiaAI Summit’ here, Narayanan said an ascendant India has a leading role to play in developing digital institutions and in guiding the beneficial adoption of AI on the country’s path to becoming ‘Viksit Bharat’ by 2047.

“We are really committed to learning more from India. We want AI to be aligned with human values and safety is deeply at the core of our mission. We want to maximise the benefits while reducing the harms,” he told the gathering.

He said that India is already harnessing the power of AI.

“First, AI has added speed and dynamism to the already dynamic entrepreneurial ecosystem in India. Entrepreneurs are building innovative products with tools like ChatGPT which are helping them accelerate in a completely new way. We’re reducing the cost of intelligence,” Narayanan told a packed house.

He said OpenAI is enabling developers to write code and to create fully conversational, natural interfaces to computing.

“OpenAI is committed to supporting the IndiaAI mission, which has set a great example not just in the Global South but also around the world,” the company executive noted.

Meanwhile, the board of ChatGPT maker OpenAI has formed a Safety and Security Committee led by directors Sam Altman (CEO), Bret Taylor (Chair), Adam D’Angelo, and Nicole Seligman, the company said on Tuesday.

According to the AI startup, the committee will be responsible for making recommendations to the full Board on critical safety and security decisions for the company’s projects and operations.

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI,” OpenAI said in a blog post.

“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” it added.

The first task of this committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days.

After 90 days, the committee will share its recommendations with the full Board.

“Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security,” the company mentioned.

In addition, the ChatGPT maker said that OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also be on the committee.

