No Approval Required for New Models, Boosting Innovation and Development in Tech Sector, Says IT Minister
A recent Indian government regulation on generative AI, a form of artificial intelligence (AI) that can produce realistic and creative text and other content, caused a stir in the tech startup ecosystem. Small businesses feared the new rule would stifle innovation and make it difficult for them to compete with larger players. However, a recent clarification from IT minister Rajeev Chandrasekhar has alleviated some of these concerns.
Small businesses apprehensive about new generative AI guidelines
The new regulation stipulated that "companies that develop generative AI models" needed government permission before making their programs available to users. This alarmed small generative AI startups, which worried that the approval process would be cumbersome and hinder their growth. Industry leaders such as Aravind Srinivas of Perplexity AI voiced their disapproval, calling the rule detrimental to India's progress in generative AI. Investors who back such businesses, including Martin Casado of Andreessen Horowitz, echoed similar sentiments, arguing that the regulation stifled innovation and ultimately harmed the public.
Government clarifies: New rule targets big tech, not startups
Minister Chandrasekhar recently clarified that the new rule primarily targets "large generative AI models." Small businesses can breathe a sigh of relief, as they won't need government approval to deploy their programs. According to the minister, the regulation aims to curb the use of "untested generative AI models" on the Indian internet. He further explained that the permission process acts as a safeguard for big companies, protecting them from lawsuits if their programs generate inaccurate information.
Lingering questions around the generative AI regulation
While the exemption for small businesses is a welcome development, some questions remain. Chief among them is that untested AI models from startups can also generate inaccurate information: a recent news report highlighted how Krutrim, Ola's new generative AI program, has produced factual errors.
Ensuring responsible AI development and user protection
The new regulation mandates that big companies using untested generative AI models must obtain government permission and clearly disclose to users the potential for inaccurate outputs. Additionally, the government requires these companies to implement a traceability mechanism by embedding identifiable markers within generated content. This would allow authorities to trace the source of misinformation or deepfakes back to the responsible party.
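The advisory does not spell out how such identifiable markers should be implemented. As one illustrative sketch only (the function names, the HMAC-based scheme, and the key handling below are assumptions, not anything the government has specified), a provider could sign each output with a keyed hash, so that a regulator holding the provider's key can later confirm which model and request produced a given piece of content:

```python
import hashlib
import hmac
import json

# Hypothetical provider key; in practice this would be a securely stored secret.
PROVIDER_KEY = b"example-secret-key"


def tag_output(text: str, model_id: str, request_id: str) -> dict:
    """Attach a verifiable provenance marker to generated content.

    The marker is an HMAC over the content and its origin metadata,
    tying the text to the model and request that produced it.
    """
    payload = {"model_id": model_id, "request_id": request_id, "text": text}
    digest = hmac.new(
        PROVIDER_KEY,
        json.dumps(payload, sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    return {**payload, "marker": digest}


def verify_tag(record: dict) -> bool:
    """Recompute the HMAC and check it against the embedded marker."""
    payload = {k: record[k] for k in ("model_id", "request_id", "text")}
    expected = hmac.new(
        PROVIDER_KEY,
        json.dumps(payload, sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(record["marker"], expected)
```

A metadata signature like this is only one possible approach; statistical watermarks embedded in the generated text itself are another, and the rule leaves the choice open.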
Small businesses relieved, but seek further clarity
Pratik Desai, founder of KissanAI (a generative AI program for agriculture), was initially apprehensive about the new rule. However, Minister Chandrasekhar's clarification brought a sense of optimism. Desai agrees with exempting small businesses but emphasizes the need for a clear definition of a "small business" to eliminate ambiguity.
The core debate: Balancing innovation and control
The controversy surrounding the generative AI regulation highlights a larger debate: how should the government approach the legal implications of generative AI models like Gemini and ChatGPT? The government also expresses concern over the potential for these programs to generate content that contradicts its stances. For instance, the Ministry of Electronics and Information Technology (MeitY) took issue with the responses produced by Google's AI program, Gemini, to a query about Prime Minister Narendra Modi.
Understanding and mitigating generative AI errors
The outputs generated by these AI models are influenced by various factors, including the underlying training data scraped from vast amounts of web content and the additional algorithmic filters layered on top. Instances of errors, known as "hallucinations," can be attributed to shortcomings in these areas. Wrapper models, which involve layering additional functionalities on top of pre-existing generative AI models, add another layer of complexity to the final outputs, influenced by the specific algorithms employed.
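To make the wrapper idea concrete, here is a minimal sketch (the function names and processing layers are invented for illustration, not taken from any real product): a wrapper leaves the underlying model untouched but runs its own logic before and after the model call, so the final output reflects both the base model's training and the wrapper's algorithms.

```python
from typing import Callable


def base_model(prompt: str) -> str:
    # Stand-in for a call to a pre-existing generative model's API.
    return f"raw answer to: {prompt}"


def make_wrapper(model: Callable[[str], str],
                 preprocess: Callable[[str], str],
                 postprocess: Callable[[str], str]) -> Callable[[str], str]:
    """Build a 'wrapper model': extra logic runs before and after the
    unchanged base model, shaping the final output."""
    def wrapped(prompt: str) -> str:
        return postprocess(model(preprocess(prompt)))
    return wrapped


# Example layers: a domain-specific prompt template and a simple output filter.
add_context = lambda p: f"Answer concisely for farmers: {p}"
redact = lambda out: out.replace("raw", "[filtered]")

wrapper = make_wrapper(base_model, add_context, redact)
```

Because the wrapper's pre- and post-processing can alter both the question the base model sees and the answer the user receives, errors in the final output may originate in either layer, which complicates tracing a hallucination to its source.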
Legality of the generative AI regulation
The regulation is not itself a law, and in the absence of specific legislation governing generative AI models in India, its enforceability is unclear. The government justifies it as a "due diligence" measure aligned with the Information Technology Rules, 2021, which online intermediaries must adhere to. Minister Chandrasekhar explained that the rule's emphasis on the upcoming elections stems from the need to curb the spread of fake news and manipulated videos that could influence election outcomes.
Finding the right balance: Innovation vs. public safety
The government's attempt to regulate generative AI programs underscores the need for a cautious approach. Striking a balance between fostering innovation in this evolving field and safeguarding the public from potential harm is paramount. Open communication and collaboration between the government, small businesses, and big tech companies are crucial to navigating the responsible development and deployment of generative AI.