Articles by Jayesh Shinde on Digit.in (https://www.digit.in)

Android XR: 5 cool things about Google’s new AR-VR operating system
https://www.digit.in/features/general/android-xr-5-cool-things-about-google-new-ar-vr-operating-system.html | Fri, 13 Dec 2024 08:38:53 +0000

The world of consumer technology is used to the constant churn of new devices – from folding phones to AI-powered laptops, and everything in between. So when a shift comes along that promises to accelerate the adoption of a whole new computing category, you have to sit up and take notice. That’s what Android XR is promising.

Android XR is Google’s new operating system, crafted in collaboration with Samsung and steeped in AI, AR, and VR features. It promises to move us beyond tapping and swiping on touchscreens to an entirely different way of engaging with our devices – one where our headsets and glasses understand not just what we want, but where we are and what’s around us. It’s the good old Android operating system evolving to learn new tricks, so that technology adapts to us and not the other way around.

Let’s take a closer look at five key features of Android XR.

1) Multimodal input and spatial compute on Android XR

Android XR introduces a big shift in how we interact with our devices by supporting a wide range of input methods, according to Google’s blog. Instead of being restricted to taps and swipes, users can rely on hand gestures, eye tracking, voice commands, and even familiar peripherals like keyboards and mice to interact through Android XR-powered devices in the near future. 

Also read: Our extended reality truly begins in 2024 with the Apple Vision Pro

This multimodality provides unparalleled flexibility, allowing people to choose the interaction style best suited to their situation, whether navigating a virtual workspace or enjoying immersive entertainment. Beyond input diversity, Android XR leverages AI to imbue flat content with depth and spatial context. Images in Google Photos take on realistic dimensionality, and Google Maps’ Street View will feel like stepping into a street rather than merely browsing it on screen – elevating the user experience beyond traditional screens.

2) Diverse app ecosystem

With Android XR, Google reimagines its signature applications – Maps, Photos, YouTube, and even Chrome – to thrive in 3D, mixed reality environments. What’s more, beyond Google’s own ecosystem, the Android XR platform will support familiar mobile and tablet apps from the Google Play Store. This backward compatibility will allow users to bring their favorite apps into the XR domain without developers reinventing the wheel. The result is an expansive library of experiences that range from productivity tools to immersive games. This expansive app support ensures that XR devices can transition effortlessly from entertainment to work, making them as versatile as a smartphone or laptop – just vastly more immersive.

3) Deep AI integration in Android XR

Central to Android XR’s vision is Gemini, Google’s integrated AI assistant. This isn’t just another digital helper, but an always-on, context-aware companion that understands your surroundings, intentions, and needs. According to Google, by analysing environmental cues and user behaviour, the AI can deliver timely insights – like directions to a nearby restaurant or safety tips as you navigate unfamiliar streets. 

In addition, Gemini’s AI-driven navigation can guide users through 3D worlds with precision, highlighting points of interest or critical data layers with minimal user effort. All of this makes XR devices genuinely helpful, not just visually impressive. 

4) Android XR-ready smart glasses and headsets

Android XR emerges from a broader vision that Google shares with partners like Samsung and others. While the initial device – code-named “Project Moohan”, a high-quality mixed reality headset – is set to launch in 2025, this is only the start. 

Also read: Samsung teams up with Google and Qualcomm for Project Moohan: Here’s what it is

The platform supports a spectrum of XR experiences, spanning fully immersive virtual reality realms, augmented overlays atop physical spaces, and even audio-centric applications for when visuals aren’t needed. The potential hardware ecosystem is extensive. By collaborating with hardware partners including Qualcomm, Sony, and startups like Lynx and XREAL, Android XR aims to foster a market where different devices cater to diverse preferences, professions, and budgets. This approach ensures that XR doesn’t remain a niche novelty but evolves into a practical technology used for everyday tasks.

5) Developer-friendly foundations of Android XR

As much as Android XR is about user experience, it’s equally about empowering developers. And Google has made the platform accessible and interoperable from day one. Developers can leverage ARCore for augmented reality, OpenXR for advanced virtual experiences, and familiar tools like Android Studio, Jetpack Compose, and Unity to build and optimize XR apps and games. This foundation means creators can adapt their existing skill sets rather than mastering entirely new frameworks. It also encourages swift experimentation, letting developers prototype ideas and iterate until they produce compelling content. 

By enabling developers to craft experiences that seamlessly integrate with other Android devices, the platform ensures a cohesive ecosystem where hardware and software complement each other. The result? A richer catalog of XR apps poised to reshape how we learn, play, and communicate in a world increasingly defined by contextual computing.

What are your thoughts on Google’s announcement of Android XR? Will it challenge Apple’s Vision Pro and its roadmap for XR devices and spatial computing? Who will come out on top? All these questions are bubbling in my head, which I’ll hopefully tackle in another article. Exciting times ahead for sure in the world of XR and spatial computing!

Also read: Meta Orion AR glasses make Apple Vision Pro look clunky

Sapient’s RNN AI model aims to surpass ChatGPT and Gemini: Here’s how
https://www.digit.in/features/general/sapient-rnn-ai-model-aims-to-surpass-chatgpt-and-gemini.html | Thu, 12 Dec 2024 05:49:23 +0000

In the fast-moving AI industry, there’s been an interesting development: Singapore-based startup Sapient Intelligence has announced the successful closure of its seed funding round, securing $22 million at a valuation of $200 million. This investment, led by prominent entities such as Vertex Ventures, Sumitomo Group, and JAFCO, primarily aims to advance Sapient’s mission – to address the limitations inherent in current AI models like OpenAI’s ChatGPT, Google’s Gemini and Meta’s Llama.

Also read: Osmo’s scent teleportation: How AI is digitising scent for the first time

That’s right. A new AI startup wants to challenge the popular incumbents like OpenAI, Meta, Google and others with a more powerful, more comprehensive AI foundational model which better understands human nuance and has a longer lasting memory. 

An AI framework based on neuroscience

Traditional AI models, including GPT-4 and Gemini, primarily utilise transformer architectures that generate predictions by sequentially building upon prior outputs. Yes, this is the same concept outlined in the ‘Attention Is All You Need’ research paper from 2017, which experts all over the world consider a landmark moment in NLP – one that kickstarted the era of ChatGPT. 

While effective for various tasks, this autoregressive method often encounters challenges with complex, multi-step reasoning, leading to issues like hallucinations – instances where the model produces incorrect or nonsensical information.

Also read: Google Gemini controversies: When AI went wrong to rogue

Austin Zheng, co-founder and CEO of Sapient Intelligence, elaborated on these challenges in a recent interview with VentureBeat, stating, “With current models, they’re all trained with an autoregressive method… It has a really good generalization capability, but it’s really, really difficult for them to solve complicated and long-horizon, multi-step tasks.”

To overcome these limitations, Sapient is developing a novel AI architecture that draws inspiration heavily from the world of neuroscience and mathematics. This new design integrates transformer components with recurrent neural network (RNN) structures, emulating human cognitive processes, according to Sapient. Zheng explained, “The model will always evaluate the solution, evaluate options and give itself a reward model based on that… [It] can continuously calculate something recurrently until it gets to a correct solution.”

Recurrent Neural Networks (RNNs) process input sequentially, maintaining a hidden state that captures information from previous inputs – which makes them great for tasks where order is crucial, such as time-series forecasting. However, this same sequential nature makes it hard for RNNs to capture long-range dependencies, owing to fundamental issues like the vanishing gradient problem. This is where transformers help: transformer-based models use a self-attention mechanism that processes all elements of the input sequence in parallel, capturing relationships between distant elements without relying on sequential order. This parallelism lets transformers handle long-range dependencies more efficiently, making them particularly effective in tasks like language translation, where understanding context across an entire sentence is essential. 

Think of RNNs as a CPU that can only do sequential tasks, and transformers as GPUs that excel at parallel processing. In tandem, they complement each other’s strengths and weaknesses – which is exactly what Sapient is attempting to infuse its AI model with.
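The CPU/GPU analogy can be made concrete with a toy sketch (illustrative only – this is generic NumPy, not Sapient’s architecture): the RNN must loop token by token because each hidden state depends on the previous one, while self-attention scores every token pair in a single matrix operation.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 3                    # 4 tokens, 3-dimensional embeddings
x = rng.standard_normal((seq_len, d))

# RNN: one step at a time – each hidden state depends on the previous one
W_h, W_x = rng.standard_normal((d, d)), rng.standard_normal((d, d))
h = np.zeros(d)
for t in range(seq_len):             # inherently sequential, like a CPU
    h = np.tanh(W_h @ h + W_x @ x[t])

# Self-attention: every token attends to every other token in parallel
scores = x @ x.T / np.sqrt(d)        # all pairwise similarities at once
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
context = weights @ x                # each row mixes information from all tokens
```

The loop is the bottleneck the vanishing-gradient problem lives in; the attention step has no loop at all, which is why transformers scale so well on parallel hardware.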

Sapient’s AI model approach involves combining transformer components with recurrent neural network structures, which ultimately allows the model to evaluate its own solutions, consider various options, and assign rewards based on outcomes – similar to cognitive processes observed in humans. When presented with a problem, we humans also try to solve it by weighing multiple options as solutions and rewarding ourselves with a dopamine hit whenever we choose the right solution, right? 

By emulating these brain functions, Sapient claims its model can iteratively refine its outputs until achieving accurate solutions, thereby improving its ability to handle complex, multi-step tasks. This brain-inspired design aims to create more adaptable and efficient AI systems, reflecting a broader trend in the AI research field where insights from neuroscience are informing the development of advanced AI models with greater capabilities.

Future of Sapient’s new AI model

What’s more, according to Sapient, their AI architecture is designed to enhance the foundational model’s flexibility and precision, which they hope will allow it to tackle a broad range of tasks with greater reliability. This approach isn’t too dissimilar from emerging reasoning models from industry leaders – OpenAI’s o1 series and Google’s Gemini 2.0, for instance. According to a company spokesperson, Sapient’s model has demonstrated superior performance in benchmarks such as Sudoku, achieving 95% accuracy without relying on intermediate tools or data – which the company hails as a significant advancement over existing neural networks.

The company is also focusing on real-world applications, including autonomous coding agents and robotics. For instance, Sapient is deploying an AI coding agent within Sumitomo’s enterprise environment to learn and contribute to the company’s codebase. Unlike some existing models that require human oversight, Sapient’s agents aim to operate autonomously, continuously learning and improving through trial and error. AI agents are incoming!

Also read: AI agents explained: Why OpenAI, Google and Microsoft are building smarter AI agents

Sapient’s advancements reflect a broader industry trend towards developing AI systems capable of complex reasoning and planning. OpenAI’s recent release of the o1 model, described as the “smartest model in the world,” signifies a shift from prediction-based models to those emphasising reasoning capabilities. Similarly, Google’s Gemini 2.0 aims to perform tasks and interact like a virtual personal assistant, showcasing improved multimodal capabilities.

These developments suggest that the AI industry is moving towards more autonomous and adaptable systems, capable of handling intricate tasks with minimal human intervention. Sapient’s unique approach, combining insights from machine learning, neuroscience, and mathematics, positions it as another player that’s making its moves in the rapidly evolving AI landscape.

Also read: ChatGPT Canvas explained: What is it and how to use new OpenAI tool?

GTA 6 will be the biggest game of 2025: Everything you should know
https://www.digit.in/features/gaming/gta-6-will-be-the-biggest-game-of-2025-everything-you-should-know.html | Wed, 11 Dec 2024 11:21:07 +0000

One of the biggest events of the entertainment and gaming world in 2025 is hands down going to be the launch of the madly anticipated GTA VI / GTA 6 / Rockstar Games’ next blockbuster game. Believe me when I say it’s not an exaggeration that every gamer from around the world is waiting with bated breath for GTA 6 to release. And according to Rockstar Games, the developer of GTA titles over the years, and Take-Two (its parent company and publisher), GTA 6 is on track to release in 2025.

Also read: GTA 6 release date: Here’s when the most-awaited game can launch in 2025

This might be contrary to popular belief, but not everyone is elated by the impending release of GTA 6. Oh no siree! According to a revealing Bloomberg report, rival game publishers are nervously waiting for Rockstar Games and Take-Two to commit to a final release date of GTA 6 – aiming to steer their own games well clear of the GTA effect, to give them maximum chances for success, of course. Bloomberg claims that other game publishers won’t be committing to their game release dates in a hurry.

It makes sense, if you think about it. When every gamer and their grandmother will be buying GTA 6 in 2025, a game that’s expected to be very big in terms of its scope, playtime and incremental cost, a sizeable portion of the gaming community will likely stay away from buying any other game for at least a few months or more. Such is the allure and magic of the GTA franchise!

Grand Theft Auto: Brief history of a gaming phenomenon

The Grand Theft Auto (GTA) series, developed by Rockstar Games, has been a cornerstone of the gaming industry since its inception in 1997, over 27 years ago. Known for its expansive open-world environments and satirical take on American culture, the GTA series has continually pushed the boundaries of interactive entertainment throughout its history.

The journey into gaming folklore began with the original Grand Theft Auto, which was released in November 1997 for MS-DOS and Windows. Recent and younger fans of later GTA games won’t even recognize the original, which was a top-down, 2D game that introduced players to a sandbox-style gaming experience across three fictional cities: Liberty City, San Andreas, and Vice City. Its success led to the expansion packs GTA: London 1969 and GTA: London 1961, both set in 1960s London. 

In 1999, Grand Theft Auto 2 continued the top-down perspective, this time set in the futuristic “Anywhere City.” But it wasn’t until GTA III in 2001 that the series went through a transformative shift, embracing 3D graphics and a third-person perspective for the very first time in franchise history. Set in a reimagined Liberty City, the game offered unprecedented freedom and a cinematic narrative, setting a whole new standard for open-world games at the time. 

Subsequently, GTA: Vice City (2002) transported players to a 1980s-inspired city, introducing full voice acting, a licensed soundtrack, and a vibrant, neon-lit world that brought millions of new fans to the franchise. In 2004, GTA: San Andreas expanded the scope of the GTA universe with an even bigger sandbox environment comprising an entire state – three cities plus rural areas – along with character customisation and RPG elements.

With support for 1920×1080 graphics, GTA IV (2008) returned to Liberty City with a focus on realism and HD graphical detail previously unseen in GTA games, while also introducing an online multiplayer mode for the very first time. Finally, GTA V (2013) elevated the GTA gaming series further by featuring not just one but three playable characters and an expansive open world encompassing both urban and rural areas. Its online game mode, called GTA Online, became a massive success, contributing to the game’s enduring popularity for over a decade and counting. By the end of 2023, the GTA game series had shipped over 425 million units (with 200 million units comprising GTA V alone), making it one of the best-selling video game franchises of all time.

GTA 6: Leaks, trailers and speculation

Given the history and legacy of the GTA game franchise, it’s of course no surprise that anticipation for Grand Theft Auto VI (GTA 6) has been fueled by a steady stream of leaks, teasers, and speculations, each cranking up the mystery and intrigue surrounding Rockstar Games’ upcoming gaming title.

You have to keep in mind that there’s been no new GTA game released since GTA V, all the way back in 2013. With expectations rife, the gaming community was rocked in September 2022 by a massive leak of over 90 video clips showcasing early development footage of GTA 6, which Rockstar Games later acknowledged as genuine. These clips offered the earliest glimpses into potential gameplay mechanics, characters, and the sandbox world of GTA 6, sparking widespread discussions and analyses among gaming fans all over the world.

Also read: New GTA 6 gameplay leaked: Smaller storyline, new characters, and more revealed

Nothing much happened after that for over a year when in December 2023, Rockstar Games released the first official trailer for GTA 6, which quickly became a cultural phenomenon. The trailer became the most watched non-music video on YouTube within its first 24 hours, and has since then amassed over 225 million views in total, underlining the immense anticipation for the game. 

Then, a year later in December 2024, Rockstar Games seemingly dropped a subtle hint about GTA 6 within the promotional material for the GTA Online DLC, Agents of Sabotage. The nod was spotted by an eagle-eyed gamer, who noticed what looked like a Hidden Package from GTA: Vice City in the DLC’s artwork – which many interpreted as an easter egg pointing to the upcoming game’s setting. While Rockstar Games has remained tight-lipped about GTA 6’s setting and larger themes, subsequent leaks and reports suggest a return to Vice City – the GTA universe’s fictional take on Miami – is all but confirmed. 

Leaks have also hinted at a dual-protagonist system in GTA 6, with players getting to choose between a male and a female character. Borrowing a page from role-playing games, GTA 6 is believed to have extensive character personalisation and customisation options built in, allowing gamers to make the playable character as personal as possible. Rumours of in-game social media integration have also circulated, especially after the release of the original GTA 6 trailer. Of course, it’s not too long a wait now before the game officially releases.

GTA 6 release date in 2025

Everyone from GTA fans to gaming industry analysts has speculated on a potential release window for the hotly anticipated game, with some reports indicating a fall 2025 launch for GTA 6. This release timeframe originates from a senior executive at Rockstar Games’ parent company.

Following the record-breaking trailer launch of GTA 6, Rockstar Games cursorily suggested a release window of 2025. Later during a May 2024 earnings call, no less than the Take-Two Interactive CEO Strauss Zelnick commented on GTA 6’s release timeline in 2025, saying that he was feeling “highly confident that we’ll deliver [GTA 6] in fall of 2025.”

Make of that what you will. Whether GTA 6 launches around Diwali 2025 is anybody’s guess, but I think there’s a very good chance we’ll all be playing the game before Christmas 2025. Unless the game is tragically delayed yet again – and it’s already been over a decade since GTA V, if you think about it. Let’s hope that before the end of 2025, gamers around the world will be witnessing the cultural phenomenon that GTA 6 is destined to be!

Also read: GTA 6 launch timeline confirmed by parent company: Here’s when it will launch in 2025

Google Willow quantum chip explained: Faster than a supercomputer
https://www.digit.in/features/general/google-willow-quantum-chip-explained-faster-than-a-supercomputer.html | Tue, 10 Dec 2024 05:44:59 +0000

Google might be battling antitrust cases left, right and centre, but that isn’t stopping the search engine giant from taking big strides into the future of quantum computing. Google has shown off Willow, its latest and most powerful quantum computing chip to date. According to Google and Alphabet CEO Sundar Pichai, Willow is a “state-of-the-art quantum computing chip with a breakthrough that can reduce errors exponentially.” 

Also read: Google unveils quantum computing chip Willow and even Elon Musk is impressed

In his post on X, Pichai shared more about the Willow quantum chip’s exceptional performance – which, according to Hartmut Neven, Google’s Quantum AI chief, is unlike anything classical computing can match.

In Google’s official blog post on the announcement, Neven claimed that Willow performed a standard computational task in under five minutes, which would otherwise take one of today’s fastest supercomputers 10 septillion (10^25) years to complete. Sounds crazy, doesn’t it?

If one thing is certain, it’s that Google’s latest quantum computing chip, Willow, represents a significant advancement in the field, showcasing unprecedented computational capabilities. Let’s check them out one by one…

Inside Google Willow quantum computing chip

Willow is Google’s state-of-the-art quantum processor, featuring 105 qubits – quantum bits that can exist in multiple states simultaneously, enabling complex computations beyond the reach of classical computers, whose bits can only ever be 0 or 1. Developed at Google’s Quantum AI lab, Willow utilises superconducting materials and operates at ultra-low temperatures to maintain quantum coherence. 
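To see why even 105 qubits is remarkable, here’s a small illustrative Python sketch of the underlying maths (my own illustration, not Google’s software): a qubit is a normalised pair of complex amplitudes, and describing an n-qubit register classically takes 2^n amplitudes – for Willow’s 105 qubits, roughly 4 × 10^31 numbers.

```python
import numpy as np

# A single qubit is a normalised vector of two complex amplitudes;
# measuring it yields 0 or 1 with probability |amplitude|^2.
qubit = np.array([1, 1j]) / np.sqrt(2)   # equal superposition of 0 and 1
probs = np.abs(qubit) ** 2               # measurement probabilities: [0.5, 0.5]
assert np.isclose(probs.sum(), 1.0)      # probabilities always sum to 1

# An n-qubit register needs 2**n amplitudes, so simulating Willow's
# 105 qubits classically means tracking 2**105 complex numbers:
amplitudes = 2 ** 105
print(f"{amplitudes:.2e}")               # on the order of 10^31 amplitudes
```

That exponential state space is what makes brute-force classical simulation hopeless, and it is the source of the speedup claims quoted above.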

Compared to the IBM Heron H2 quantum chip announced in November 2024, which has 156 qubits, Google Willow may seem weaker. But a notable innovation in Willow is its enhanced error correction, achieving an exponential reduction in error rates as qubits are added – a critical step toward building scalable quantum systems.

Google Willow quantum chip’s key achievements

In Google’s internal benchmark tests, Willow performed a computational task in under five minutes – a feat that would take the world’s fastest supercomputers 10 septillion years (10^25 years) to complete. This performance itself surpasses Google’s 2019 Sycamore processor, which solved a problem in 200 seconds that classical computers would need 10,000 years to tackle.

Also read: Shaping the future of quantum computing: Intel’s Anne Matsuura

Another critical advancement with Google Willow is its enhanced error correction – which is insanely important for quantum computers. By increasing the number of qubits and implementing real-time error correction, Willow reduces error rates exponentially, a milestone published in Nature. This development addresses a longstanding challenge in quantum computing, where adding qubits has historically introduced more errors into the system. Willow, at least, seems to have found a way around this foundational roadblock on the path to practical quantum computing.
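The “more qubits, fewer errors” idea can be illustrated with a classical analogy – a simple repetition code with majority voting, not Google’s actual surface-code scheme: once the per-bit error rate is below a threshold, enlarging the code suppresses the logical error rate exponentially.

```python
from math import comb

p = 0.01  # per-(qu)bit physical error rate, assumed below the code's threshold

def logical_error_rate(n: int) -> float:
    """Majority vote over n noisy copies fails only if more than half flip."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Growing the code from 3 to 5 to 7 copies shrinks the failure rate
# by roughly an order of magnitude at each step:
for n in (3, 5, 7):
    print(n, logical_error_rate(n))
```

Willow’s headline result is the quantum analogue of this behaviour: scaling up the error-corrected qubit array drove logical error rates down instead of up.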

Importance of the Google Willow quantum chip

Google Willow’s breakthroughs in computational speed and error correction mark a pivotal step toward practical deployments of quantum computing, opening avenues for real-world applications across industries. In AI, quantum computing could enhance machine learning algorithms and data analysis like never before. It could help us discover new pharmaceutical drugs for increasingly personalised medical treatments. It can also optimise energy systems in ways classical computers simply can’t, and pave the way for advancements in fusion energy research at an unprecedented rate.

By overcoming key obstacles in quantum error correction, Google Willow brings us closer to quantum computers that can solve complex problems currently deemed too big and time consuming for classical systems.

With the unveiling of Google’s Willow chip, are you dreaming of owning a quantum computing PC in your home? That will take some time, as current quantum computing systems are highly sensitive, requiring ultra-low temperatures and specialised environments that make them impractical for personal use. According to experts, quantum computing will disrupt fields like cryptography, medicine and AI, but will largely be accessible through cloud-based services rather than personal devices. Yeah, despite rapid advancements, it may be several decades before quantum computers become commonplace in households around the world.

Also read: PQC encryption standardised: How they secure our digital future in quantum computing era

From BFSI to super apps: Protectt.ai explains future of mobile app security
https://www.digit.in/features/mobile-phones/from-bfsi-to-super-apps-protecttai-explains-future-of-mobile-app-security.html | Mon, 09 Dec 2024 04:54:27 +0000

India has about 700 million smartphone users, with close to 14 billion UPI transactions worth ₹20 lakh crore happening every month. Beneath the glossy veneer of consumer technology lies this complex ecosystem where smartphone innovation meets the necessity of digital commerce. And safeguarding these devices and the apps ecosystem inside them goes beyond a simple antivirus program. It’s about runtime defenses, cloud-based analytics, and AI-driven intelligence that adapt to evolving cyber threats, according to Pankaj Patankar, Head of Marketing for Protectt.ai Labs Pvt Ltd. 

In an exclusive interview, Patankar explains how Protectt.ai is aiming to reshape the mobile app security landscape. What struck me most was Protectt.ai’s intense focus on real-time, in-app threat protection and the level of sophistication they bring to fighting the hidden – and sometimes not-so-hidden – adversaries lurking behind every suspicious link, tampered APK, or cleverly disguised piece of malware.

Providing constant mobile app security

Mobile security is often portrayed as a tug-of-war between developers who patch vulnerabilities and hackers who exploit them. Protectt.ai flips this dynamic on its head. Their approach is not just to block known threats, but to anticipate and neutralize them as they happen.

“At Protectt.ai, our core strengths in the mobile app security landscape are centered around our advanced Runtime Application Self-Protection (RASP) technology, which sets us apart from competitors,” Patankar explained. The mention of RASP particularly piqued my interest: rather than relying solely on perimeter defenses, RASP technology defends the app from within, making it harder for attackers to manipulate code or data.

As Patankar put it, “We leverage sophisticated AI and ML algorithms for enhanced threat detection, allowing us to adapt quickly to evolving attack vectors.” It’s a sentiment that encapsulates the company’s approach: treat security as a living, evolving system, not a static checklist of defenses.

Also read: Cybersecurity in Age of AI: Black Hat 2024’s top 3 LLM security risks

Traditional mobile antivirus solutions rely on signature-based detection, which is great for known threats but less effective against zero-day exploits or entirely new malware species. Protectt.ai, on the other hand, uses a cloud-based analysis system and next-gen runtime capabilities.

“Our RASP solution provides continuous, in-app security, detecting and responding to threats as they occur, ensuring immediate mitigation of vulnerabilities,” Patankar told me. This is critical because a threat can appear out of nowhere – a malicious snippet of code injected into a supposedly benign update, or a cleverly disguised phishing attempt exploiting a user’s trust in a familiar brand.

By maintaining a scalable cloud infrastructure, Protectt.ai can correlate threat intelligence across vast data sets. According to Patankar, “Our scalable cloud infrastructure enables efficient threat intelligence and provides actionable insights to strengthen our clients’ mobile app security posture.”

How Protectt.ai is bridging the mobile app security gap

Downloading apps only from official app stores like Google Play or Apple’s App Store is often the first recommended step towards mobile app safety that we all know. These platforms do perform initial security scans, creating a reliable environment for users. However, even after these apps are downloaded, vulnerabilities can emerge.

“While Play Store and App Store conduct rigorous security checks before publishing apps on their platform, challenges arise after installation,” Patankar explained. Attackers can reverse-engineer apps, tamper with their code, and repackage them to distribute malicious versions via phishing links. Once installed, these rogue apps can compromise user data or even carry out unauthorized financial transactions.

“This evolving threat landscape requires a proactive approach beyond the initial checks offered by app stores. Protectt.ai addresses this critical gap with AppProtectt, a state-of-the-art solution that continuously monitors apps in real-time, detecting and neutralizing dynamic threats such as malware, reverse engineering, and tampering,” Patankar emphasised. It’s a proactive stance – while stores focus on a one-time approval, Protectt.ai ensures the app remains safe throughout its lifecycle.

When you think of mobile security, the word “antivirus” might come to mind. But as Patankar noted, antivirus solutions mainly target known malware strains. Today’s threats extend much further. Reverse engineering, debugging, root detection bypasses, API manipulation – the list goes on and on.

“AppProtectt provides 75+ security capabilities such as Anti-Malware, Unsecured Wi-Fi, Reverse Engineering, Decompilation, Debugging, Root Detection, App Tampering protection, Screen sharing and Screen Mirroring Fraud Protection,” Patankar said. All of that’s quite a mouthful, but it boils down to a comprehensive, layered defense that doesn’t just check for malware signatures – it watches for anything suspicious happening in the runtime environment.
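The difference between the two detection philosophies can be shown with a toy sketch (my own illustration, not Protectt.ai’s implementation): a signature scanner only recognises payloads whose hashes it has seen before, while a runtime check flags suspicious behaviour even from a never-seen-before payload.

```python
import hashlib

# Signature scanning: flag a payload only if its hash is already known bad.
KNOWN_BAD_HASHES = {hashlib.sha256(b"old-malware-sample").hexdigest()}

def signature_scan(payload: bytes) -> bool:
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# Runtime (RASP-style) check: flag behaviour, regardless of payload hash.
SUSPICIOUS_BEHAVIOURS = {"debugger_attached", "repackaged_signature",
                         "screen_mirroring"}

def runtime_check(observed_events: set) -> bool:
    return bool(observed_events & SUSPICIOUS_BEHAVIOURS)

# A brand-new (zero-day) payload slips past the signature scan...
zero_day = b"never-seen-before-malware"
assert signature_scan(zero_day) is False
# ...but its behaviour at run time still trips the in-app check.
assert runtime_check({"debugger_attached", "network_io"}) is True
```

This is why runtime protection complements, rather than replaces, the one-time vetting that app stores perform.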

Also read: WazirX hack: Confusing aftermath of the biggest cyberattack on Indian crypto exchange

Patankar summed it up: “While traditional antivirus software covers known malware and viruses, AppProtectt offers multi-layered defense mechanisms tailored to safeguard apps from a wide range of sophisticated mobile security threats.”

Few industries exemplify the need for robust mobile security more than BFSI (Banking, Financial Services, and Insurance). With millions of users performing sensitive transactions daily, a single breach can be catastrophic.

Patankar painted a vivid picture: “Our internal research suggests that more than 90% of apps in the BFSI sector are prone to reverse engineering.” That’s staggering, considering how many of us rely on mobile banking apps for everything from checking balances to making mortgage payments.

To drive the point home, he shared a real-world example: “For a leading private sector bank in India with 5+ million users, we implemented our solution. In 3 months, we saw an 87% reduction in screen mirroring cases.” That’s the kind of tangible result that stands out – less theoretical and more like a real-life intervention that prevents fraud and preserves trust.

AI-driven threat detection in a zero-trust environment

Looking beyond the present, Patankar predicted emerging scenarios where the complexity of mobile apps would continue to grow. “Super apps are the future, combining multiple services like messaging, payments, shopping, and travel into a single platform,” he said. More functionalities mean broader attack surfaces, reinforcing the need for a robust, adaptive security posture.

AI-driven threat detection, behavioral analytics, and zero trust frameworks are all on the horizon. Patankar was optimistic: “By offering diverse functionalities under one roof, super apps boost user engagement and stickiness…These technologies will help identify and neutralize threats in real-time, often before the user is even aware, by analyzing patterns and anomalies in data.”

In other words, tomorrow’s mobile app security won’t just react to threats – it’ll anticipate them.

A common criticism of security solutions is that they sacrifice user experience for protection. If apps become sluggish or start throwing false positives left and right, users lose patience – and possibly trust.

Patankar addressed this: “Protectt.ai uses deep technology solutions – advanced AI and machine learning to analyze threats. This helps enhance product security capabilities and minimize false positives.” The result? Users can go about their business without constantly encountering unnecessary red flags.

Balancing top-tier security with seamless usability is no small feat. Yet, Protectt.ai seems committed to ensuring their solutions become unobtrusive guards that work quietly in the background, delivering peace of mind without making daily tasks more complicated.

As we wrapped up our conversation, Patankar hinted at what’s next for Protectt.ai. “We are all set to expand our footprint in the USA, Dubai, and the MEA region. We’re set to launch a series of innovative products to secure the end-to-end user mobile app journey,” he confirmed. This forward momentum signals that mobile app security is no longer a niche concern. With cyber threats evolving daily, the industry – and Protectt.ai in particular – must remain agile, continually refining defenses and preempting new vulnerabilities.

Also read: McAfee’s Pratim Mukherjee on fighting deepfake AI scams in 2024 and beyond

]]>
Complexities of Ethical AI, explained by Intel’s Lama Nachman https://www.digit.in/features/general/complexities-of-ethical-ai-explained-by-intel-lama-nachman.html Thu, 05 Dec 2024 08:07:18 +0000 https://www.digit.in/?p=676001 When we talk about artificial intelligence, the conversation often gravitates toward its tangible impacts — the algorithms that can predict our shopping habits, the machines that can drive cars, or the systems that can diagnose diseases. Yet, lurking beneath these visible advancements are intangible unknowns that most people don’t fully grasp. To shed light on these hidden challenges, I interviewed Lama Nachman, Intel Fellow and Director of the Intelligent Systems Lab at Intel.

Nachman is at the forefront of AI research and development, steering projects that push the boundaries of what’s possible while grappling with the ethical implications of these technologies. Our conversation delved into the less obvious obstacles in responsible AI development and how Intel is addressing them head-on.

The intangible unknowns of Ethical AI

“While technical aspects like algorithm development are well understood, the intangible unknowns lie in the intersection of stakeholder needs and the AI lifecycle,” Nachman began. She highlighted that these challenges manifest in subtle ways that aren’t immediately apparent to most people.

“From algorithmic bias causing invisible but significant harm to certain populations, to the complex balance of automation versus human intervention in the workforce,” she explained, “less obvious challenges include building genuine trust beyond technical reliability and the environmental impact of AI systems.”

Also read: Navigating the Ethical AI maze with IBM’s Francesca Rossi

One pressing issue is the advent of large language models. “With large language models, it has gotten much harder to test for safety, bias, or toxicity of these systems,” Nachman noted. “Our methods must evolve to establish benchmarks and automated testing and evaluation of these systems. In addition, protecting against misuse is much harder given the complexity and generalizability of these models.”

Intel’s approach to Ethical AI

As a pioneer in technology, Intel recognises the ethical implications that come with advancing AI technologies. Nachman emphasised Intel’s commitment to responsible AI development. “At Intel, we are fully committed to advancing AI technology in a responsible, ethical, and inclusive manner, with trust serving as the foundation of our AI platforms and solutions,” she said.

Intel’s approach focuses on ensuring human rights, privacy, security, and inclusivity throughout their AI initiatives. “Our Responsible AI Advisory Council conducts rigorous reviews of AI projects to identify and mitigate potential ethical risks,” Nachman explained. “We also invest in research and collaborations to advance privacy, security, and sustainability in AI, and engage in industry forums to promote ethical standards and best practices.”

Diversity and inclusion are also central to Intel’s strategy. “We understand the need for equity, inclusion, and cultural sensitivity in the development and deployment of AI,” she stated. “We strive to ensure that the teams working on these technologies are diverse and inclusive.”

She highlighted Intel’s digital readiness programs as an example. “Through Intel’s digital readiness programs, we engage students to drive awareness about responsible AI, AI ethical principles, and methods to develop responsible AI solutions,” according to Nachman. “The AI technology domain should be developed and informed by diverse populations, perspectives, voices, and experiences.”

Ethical AI challenges and lessons learned

Implementing responsible AI practices comes with its own set of challenges. Nachman was candid about the obstacles Intel has faced. “A key challenge we have as developers of multi-use technologies is anticipating misuse of our technologies and coming up with effective methods to mitigate this misuse,” she acknowledged.

She pointed out that consistent regulation of use cases is an effective way to address technology misuse. “Ensuring environmental sustainability, developing ethical AI standards, and coordinating across industries and governments are some of the challenges that we as an industry need to address together,” Nachman added.

Also read: Balancing AI ethics with innovation, explained by Infosys’ Balakrishna DR

When asked about the lessons learned, she emphasised the importance of collaboration and continuous improvement. “The biggest learning has been the importance of responsible AI development as a foundation of innovation,” she said. “We need multidisciplinary review processes and continuous advancement in responsible AI practices, as well as collaboration across industries, academia, and governments to drive progress in responsible AI.”

On the prospect of establishing a global policy on AI ethics, Nachman was thoughtful. “Global policy on AI ethics should centre human rights, ensure inclusion of diverse voices, prioritise the protection of AI data enrichment workers, promote industry-wide collaboration, responsible sourcing, and continued learning to address critical issues in AI development,” she proposed. “This policy should aim to ensure fairness, transparency, and accountability in AI development, protecting the rights of workers, promoting responsible practices, and fostering continued improvement.”

India’s role in shaping Ethical AI

India is rapidly becoming a global hub for AI talent and innovation. Intel is leveraging India’s unique position to advance responsible AI development through ecosystem collaboration. “Our initiatives in India reflect a deep commitment to fostering ethical AI practices while harnessing the country’s vast potential in the field,” Nachman shared.

Intel has launched several targeted programs in collaboration with government and educational institutions. “The ‘Responsible AI for Youth’ program, developed in collaboration with MeitY and the National e-Governance Division, aims to empower government school students in grades 8-12 with AI skills and an ethical technology mindset,” she said. “This initiative is crucial in preparing India’s next generation of innovators to approach AI development responsibly.”

Another significant initiative is the “AI for All” program, a collaborative effort between Intel and the Ministry of Education. “This self-paced learning program is designed to demystify AI for all Indian citizens, regardless of their background or profession,” Nachman explained. “By enabling over 4.5 million citizens with AI basics, Intel is helping to create a society that is not only AI-literate but also aware of the ethical implications of AI technologies.”

Furthermore, the “Intel AI for Youth” program, developed in collaboration with CBSE and the Ministry of Education, empowers youth to create social impact projects using AI. “With over 160,000 students trained in AI skills, this initiative is significantly contributing to India’s growing pool of AI talent,” according to Nachman.

“Through these programs and collaborations, Intel is not just leveraging India’s position as an AI hub but is actively shaping it,” Nachman emphasised. “By focusing on responsible AI development from the grassroots level up, Intel is helping ensure that as India becomes a global leader in AI, it does so with a strong foundation in ethical practices.”

Balancing data needs with privacy

Data privacy is paramount, especially with AI’s increasing reliance on vast amounts of data. Nachman detailed how Intel balances the need for data with the imperative to protect individual privacy.

“Intel’s commitment to privacy extends to its broader security innovations, developing both hardware and software solutions to enhance AI security, data integrity, and privacy across the entire ecosystem,” she explained. “These efforts aim to create a robust foundation for trustworthy AI deployment.”

At the core of Intel’s strategy is the development of Confidential AI. “This technology allows businesses to harness AI while maintaining stringent security, privacy, and compliance standards,” Nachman said. “It protects sensitive inputs, trained data, and proprietary algorithms, enabling companies to leverage AI capabilities without compromising confidentiality.”

Also read: AI agents explained: Why OpenAI, Google and Microsoft are building smarter AI agents

To ensure ethical considerations are at the forefront, Intel’s Responsible AI Advisory Council conducts rigorous reviews throughout AI project lifecycles. “Assessing potential ethical risks, including privacy concerns, is a key part of our process,” she noted. “Using a privacy impact assessment process for all datasets helps identify and mitigate privacy issues early in the development stage.”

Intel also invests heavily in privacy-preserving technologies such as federated learning. “This approach enables AI model training on decentralised data without compromising individual privacy,” Nachman explained. “It allows for the development of powerful AI models while keeping sensitive data secure and localised.”
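Nachman doesn’t detail Intel’s implementation, but the core of federated learning – FedAvg-style aggregation – is easy to sketch: each client computes a model update on its own data, and only the updates (never the raw data) are sent to the server and averaged. A toy version for a one-parameter model, with invented client data:

```python
def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a client's private data
    for a 1-parameter model y = w*x with squared-error loss."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def federated_average(client_weights):
    """Server step: average parameter-wise; raw data never leaves clients."""
    n = len(client_weights)
    return [sum(ws[i] for ws in client_weights) / n
            for i in range(len(client_weights[0]))]

# Two clients whose private data both follow y = 2x (illustrative).
client_a = [(1.0, 2.0), (2.0, 4.0)]
client_b = [(3.0, 6.0), (4.0, 8.0)]

weights = [0.0]
for _ in range(50):
    updates = [local_update(weights, client_a),
               local_update(weights, client_b)]
    weights = federated_average(updates)

print(round(weights[0], 2))  # converges toward 2.0
```

The privacy property comes from what crosses the network: only model parameters move, so each client’s records stay local – which is exactly the “secure and localised” guarantee Nachman refers to.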

She underscored the importance of respecting and safeguarding privacy and data rights throughout the AI lifecycle. “Consistent with Intel’s Privacy Notice, Intel supports privacy rights by designing our technology with those rights in mind,” she said. “This includes being transparent about the need for any personal data collection, allowing user choice and control, and designing, developing, and deploying our products with appropriate guardrails to protect personal data.”

Need for collaborative effort and education

According to Nachman, Intel’s commitment to responsible AI extends beyond its corporate initiatives. “Intel actively collaborates with the ecosystem, including industry and academic institutions,” Nachman shared. “We contribute to ethical AI discussions to address shared challenges and improve privacy practices across sectors.”

Furthermore, Intel emphasises education and awareness through programs like the AI for Future Workforce Program. “These efforts help in instilling a deep understanding of AI ethics and responsible development practices in the next generation of AI professionals,” she said.

Over the course of this interview, it quickly became clear to me that responsible AI development is a multifaceted challenge requiring collective effort. “We as an industry need to address these challenges together,” Nachman asserted. “It’s not just about what one company can do, but how we can collaborate across industries, academia, and governments to drive progress in responsible AI.”

She stressed that the development of AI technologies should be informed by diverse populations and experiences. “The AI technology domain should be developed and informed by diverse populations, perspectives, voices, and experiences,” she reiterated.

Also read: Google Gemini controversies: When AI went wrong to rogue

]]>
Balancing AI ethics with innovation, explained by Infosys’ Balakrishna DR https://www.digit.in/features/general/balancing-ai-ethics-with-innovation-explained-by-infosys-balakrishna-dr.html Fri, 29 Nov 2024 07:25:57 +0000 https://www.digit.in/?p=672113 As AI systems become increasingly integrated into our daily lives, the ethical considerations surrounding their development and deployment have never been more critical. To delve deeper into this pressing issue, I interviewed Balakrishna D. R. (Bali), Executive Vice President and Global Services Head of AI and Industry Verticals at Infosys.

Bali’s insights shed light on how a global leader like Infosys navigates the complex terrain of AI ethics, balancing the relentless pursuit of technological advancement with a steadfast commitment to responsible practices.

Infosys’ AI vision grounded in responsibility

Infosys doesn’t just dabble in AI, says Bali, it has crafted a comprehensive vision that embeds ethical considerations into every facet of its AI endeavours. “We have enumerated the guiding principles in a Responsible AI (RAI) Vision and Purpose document, laying the foundation for all our AI pursuits,” Bali explained. “It is aligned with our corporate vision and values (CLIFE). A well-articulated RAI Vision is a critical first step for any AI-first enterprise.”

Also read: Navigating the Ethical AI maze with IBM’s Francesca Rossi

He emphasised that despite the rapid evolution of AI technologies — including newer models, agentic frameworks, and hardware platforms — the fundamental principles of Responsible AI remain unchanged. “The seven pillars of RAI at Infosys act as the north star for us and have become a critical differentiator for our AI offerings,” he said. These pillars include Transparency, Fairness, Equal Access, Human + AI (not Human vs. AI), Safeguarding Human Rights, Ethical Innovation, and Global Responsible AI Adoption.

To operationalize these principles, Infosys launched the Responsible AI Suite (AI3S) as part of Infosys Topaz. “It helps enterprises balance innovation with ethical considerations and mitigate risks due to AI adoption,” Bali noted. Driven by the Responsible AI Office — a dedicated team of cross-functional experts — the suite offers a framework that aims to monitor and protect AI models and systems from threats through technical, legal, and process guardrails.

Fairness in AI isn’t a box to be checked but a continuous commitment. Infosys approaches this challenge through interventions at three levels: Strategic, Tactical, and Operational.

At the strategic level, the company builds overarching frameworks and governance structures. “This is where we build well-crafted policies for procurement, deployment, and responsible AI talent reskilling,” Bali explained.

The tactical level involves mechanisms for continuous monitoring. “We install mechanisms like risk and impact assessments, conduct market scans of AI vulnerabilities, perform rigorous red-teaming, and conduct periodic audits,” he said.

Operationally, Infosys focuses on the right processes, legal frameworks, and technical guardrails. “This includes enabling developers with specialised toolkits for building responsibly and developing technical guardrails that monitor and filter the input and output,” Bali added.

Their “Responsible AI by Design” methodology is central to these efforts. It focuses on the nature of the use case, the type of models and data being used, and how the model is trained. “We analyse the use case from varied lenses, select the right model, assess the data used to fine-tune the model, and build runtime guardrails that detect and mitigate subtle biases in generated content,” Bali elaborated.

Also read: In pursuit of ethical AI

Data is the lifeblood of AI, but with great data comes great responsibility. Infosys ensures adherence to global regulations like GDPR, CCPA, and the EU AI Act. “For privacy, we employ RAI by design across the lifecycle,” Bali said. This includes privacy assessments and audits, formulating policies and governance, and implementing technical approaches like homomorphic encryption and federated learning.

He highlighted the importance of developer and user training, as well as automated systems to track and enforce data retention. “Process changes like data anonymization, minimization, and access controls are crucial,” he added.
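Bali lists anonymization and minimization as process changes; in their simplest form they mean pseudonymising direct identifiers and dropping every field the use case doesn’t need. A hedged sketch of both steps (the record fields and salt are invented for illustration, not Infosys’ scheme):

```python
import hashlib

def pseudonymise(value: str, salt: str) -> str:
    """Replace an identifier with a salted SHA-256 digest so records
    stay linkable across a dataset without exposing the raw value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def minimise(record: dict, needed_fields: set) -> dict:
    """Data minimization: keep only the fields the use case requires."""
    return {k: v for k, v in record.items() if k in needed_fields}

record = {                       # illustrative customer record
    "customer_id": "C-10293",
    "email": "user@example.com",
    "purchase_total": 499.0,
    "home_address": "221B Baker Street",
}

safe = minimise(record, {"customer_id", "purchase_total"})
safe["customer_id"] = pseudonymise(safe["customer_id"], salt="per-dataset-salt")
print(safe)  # email and address dropped, id pseudonymised
```

Salted hashing alone isn’t full anonymization under GDPR – it’s pseudonymisation, and regulators treat it accordingly – which is why it is combined with access controls and retention limits, as Bali notes.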

Collaborative efforts for Ethical AI

Infosys understands that fostering ethical AI practices is not a solitary endeavour. “We have been working with multiple academic bodies, regulatory institutions, and governments as industry consultants for advancing responsible AI,” Bali shared.

Some notable collaborations include membership in the AI Safety Institute Consortium established by NIST, joining the Coalition of Content Provenance and Authenticity (C2PA), and participating in the Artificial Intelligence Governance Alliance (AIGA) spearheaded by the World Economic Forum. “Infosys is also a member of the ISO committees on AI and contributes to the development of future AI standards,” he noted.

Furthermore, the company has partnered with the Stanford University Institute for Human-Centred Artificial Intelligence and joined the AI Alliance alongside leading companies like IBM, Meta, and Intel.

Beyond corporate initiatives, Infosys leverages AI to drive social impact. Bali shared a compelling example: “We have developed an AI accessibility solution for hearing and visually impaired customers of a major broadcasting company. It is a real-time audio and visual captioning system that provides simultaneous scene and dialogue descriptions, enabling individuals with disabilities to fully experience and enjoy entertainment.”

Another initiative is Infosys Springboard, which uses AI to create a digital learning platform aimed at supporting underserved students and professionals in India. “We are building personalised learning assistants that help learners with customised learning paths, adapting to their individual needs and learning styles,” he explained.

AI’s environmental footprint is a growing concern, and Infosys is proactive in addressing it. “We manage our environmental concerns in AI through energy-efficient hardware, data centers, and optimised model designs,” Bali said. “We have built our own optimised AI Cloud with specialised infrastructure, focusing on reducing our AI carbon footprint.”

Infosys is also using AI to optimise its server and data center operations, managing cooling systems and workload distribution to minimise energy consumption. “We have collaborated with Shell to create an integrated solution for green data centers using immersion cooling technology,” he added.

Adopting “Green AI” techniques, Infosys leverages methods like quantization and pruning to reduce compute demands and energy usage. “We conduct intensive assessments to compute and calculate our Scope 3 emissions due to AI by selecting and working with environmentally conscious vendors and partners,” Bali emphasised.
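Quantization, one of the “Green AI” techniques Bali names, trades a little precision for much smaller, cheaper models: 32-bit float weights are mapped to 8-bit integers for storage and inference. A minimal post-training quantization sketch (pure Python, symmetric scheme, toy weights chosen for illustration):

```python
def quantize(weights, num_bits=8):
    """Symmetric post-training quantization: map floats to signed ints."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [v * scale for v in q]

weights = [0.91, -0.42, 0.07, -1.27, 0.55]    # illustrative float weights
q, scale = quantize(weights)
approx = dequantize(q, scale)

max_err = max(abs(w - a) for w, a in zip(weights, approx))
print(q)                 # small signed integers: 4x smaller than float32
print(max_err <= scale)  # rounding error bounded by one step: True
```

Int8 storage is a quarter the size of float32 and integer arithmetic is cheaper on most hardware, which is where the energy savings come from; pruning attacks the same cost from the other direction by removing weights outright.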

Recognizing that the future of AI lies in the hands of today’s learners, Infosys invests heavily in education and training. “We are empowering the next generation of AI professionals across the globe in multiple ways,” Bali stated.

Initiatives include ethical AI workshops focusing on AI ethics and regional regulations, partnership programs with universities like IIT/IIMs and Kellogg’s, and internal learning platforms offering courses from basic foundations to practitioner levels. “Our internship program InStep has focused projects on responsible AI,” he added.

Also read: How is Sam Altman redefining AI beyond ChatGPT?

Infosys also contributes to open-source AI ethics tools and hosts hackathons to tackle some of the toughest problems in responsible AI. “We believe in fostering a culture of continuous learning and ethical awareness,” Bali said.

A vision for India’s AI future

Looking ahead, Infosys envisions AI as a catalyst for India’s growth across multiple sectors. “With the potential to be the AI talent capital of the world, we have to do pioneering work in frontier research,” Bali asserted.

He emphasised the need for scalable solutions and frugal innovation to reduce the cost of AI at the unit economics level. “We cannot deliver to our huge population if we cannot reduce cost per transaction to a minimum,” he warned. “We have to experiment and innovate in novel ways to achieve this and subsequently set the trend for AI adoption worldwide.”

Bali echoed Infosys Chairman and Founder Nandan Nilekani’s sentiment: “India has the potential to be the AI use-case capital of the world.” To realise this, he outlined a four-pronged approach:

  1. Instituting and streamlining AI governance and regulatory control via formulation of policies and standards.
  2. Creating AI safety bodies with centralised accountability to ensure ethical AI by enforcing regulations.
  3. Investing in R&D of technical guardrails to solve ethical AI design challenges and enable a talent pool in Responsible AI.
  4. Building platforms and ecosystems for idea exchanges.

Infosys isn’t just setting lofty goals; it’s actively working to achieve them. “We are doing our part by walking the talk in our own organisation and with our customers,” Bali affirmed.

In a world where the ethical implications of AI are under increasing scrutiny, Infosys is trying to embed responsibility into the very fabric of its AI endeavours. Bali’s and Infosys’ insights offer a roadmap for other organisations navigating the intricate balance between innovation and ethics.

Also read: AI agents explained: Why OpenAI, Google and Microsoft are building smarter AI agents

]]>
Navigating the Ethical AI maze with IBM’s Francesca Rossi https://www.digit.in/features/general/navigating-ethical-ai-maze-with-ibm-francesca-rossi.html Thu, 28 Nov 2024 05:43:24 +0000 https://www.digit.in/?p=671087 At a time when artificial intelligence is no longer the stuff of science fiction but an integral part of our daily lives, the ethical implications of AI deployments have moved from theoretical academic debates to pressing real-world concerns. As AI systems become more embedded in every aspect of our phygital existence, the question is no longer about what AI can do, but what it should be doing in a responsible manner. I had the opportunity to interview Francesca Rossi, IBM Fellow and Global Leader for AI Ethics at IBM, to delve into these complex issues.

Also read: How is Sam Altman defining AI beyond ChatGPT?

Francesca Rossi is no stranger to the ethical quandaries posed by AI. With over 220 scientific articles under her belt and leadership roles in organisations like AAAI and the Partnership on AI, she’s at the forefront of shaping how we think about AI ethics today and building AI we can all trust.

Ethical challenges of rapid AI growth

“AI is growing rapidly – it’s being used in many services that consumers interact with today. That’s why it’s so important to address the ethical challenges that AI can bring up,” Rossi started off. She highlighted the critical need for users to trust AI systems, emphasising that trust hinges on explainability and transparency.

“For users, it’s important from an ethical standpoint to be able to trust the recommendations of an AI system. Achieving this needs AI explainability and transparency,” she said. But trust isn’t the only concern. Rossi pointed out that data handling, privacy, and protecting copyrights are also significant ethical challenges that need to be tackled head-on.

When asked how IBM defines ‘Responsible AI,’ Rossi detailed a comprehensive framework that goes beyond mere principles to include practical implementations.

“IBM built a very comprehensive AI ethics framework, which includes both principles and their implementations, with the goal to guide the design, development, deployment, and use of AI inside IBM and for our clients,” she explained.

The principles are straightforward yet profound, according to Rossi:

  1. The purpose of AI is to augment human intelligence.
  2. Data and insights belong to their creator.
  3. New technology, including AI systems, must be transparent and explainable.

But principles alone aren’t enough. Rossi emphasised the importance of turning these principles into action: “The implementation of these principles includes risk assessment processes, education and training activities, software tools, developers’ playbooks, an integrated governance program, research innovation, and a centralised company-wide governance in the form of an AI ethics board.”

Also read: Google Gemini controversies: When AI went wrong to rogue

IBM’s commitment to open and transparent innovation is also evident. “We’ve released our family of Granite models to the open-source community under an Apache 2.0 licence for broad, unencumbered commercial usage, along with tools to monitor the model data – ensuring it’s up to the standards demanded by responsible enterprise applications,” Rossi added.

Collaboration with policymakers is key

The role of policymakers in AI ethics is a hot topic, and Rossi believes that collaboration between companies and governments is crucial.

“As a trusted AI leader, IBM sees a need for smart AI regulation that provides guardrails for AI uses while promoting innovation,” she said. IBM is urging governments globally to focus on risk-based regulation, prioritise liability over licensing, and support open-source AI innovation.

“While there are many individual companies, start-ups, researchers, governments, and others who are committed to open science and open technologies, more collaboration and information sharing will help the community innovate faster and more inclusively, and identify specific risks, to mitigate them before putting a product into the world,” Rossi emphasised.

One might wonder how these high-level principles translate into practical measures within IBM’s AI systems. Rossi provided concrete examples: “IBM has developed practitioner-friendly bias mitigation approaches, proposed methods for understanding differences between AI models in an interpretable manner, studied maintenance of AI models from the robustness perspective, and created methods for understanding the activation space of neural networks for various trustworthy AI tasks.”

She also mentioned that IBM has analysed adversarial vulnerabilities in AI models and proposed training approaches to mitigate such vulnerabilities. “We made significant updates to our AI explainability 360 toolkit to support time series and industrial use cases, and have developed application-specific frameworks for trustworthy AI,” she added.

AI innovation within ethical boundaries

A common concern is whether strict ethical guidelines stifle innovation. On the contrary, Rossi sees ethics as an enabler rather than a hindrance. “AI can drive tremendous progress for business and society – but only if it’s trusted,” she stated.

She cited IBM’s annual Global AI Adoption Index, noting that while 42% of enterprise-scale companies have deployed AI, 40% are still exploring or experimenting without deployment. “Ongoing challenges for AI adoption in enterprises remain, including hiring employees with the right skill sets, data complexity, and ethical concerns,” Rossi said. “Companies must prioritise AI ethics and trustworthy AI to successfully deploy the technology and encourage further innovation.”

Also read: From IIT to Infosys: India’s AI revolution gains momentum, as 7 new members join AI Alliance

Building AI systems that prioritise ethical considerations is no small feat. Rossi acknowledged the hurdles: “We see a large percentage of companies stuck in the experimentation and exploration phase, underscoring a dramatic gap between hype around AI and its actual use.”

She pointed out that challenges like the skills gap, data complexity, and AI trust and governance are significant barriers. “IBM’s annual Global AI Adoption Index recently found that while around 85% of businesses agree that trust is key to unlocking AI potential, well under half are taking steps towards truly trustworthy AI, with only 27% focused on reducing bias,” she noted.

To address these challenges, IBM launched watsonx, an enterprise-ready AI and data platform. “It accelerates the development of trusted AI and provides the visibility and governance needed to ensure that AI is used responsibly,” Rossi explained.

India’s role in shaping global AI ethics

India is rapidly emerging as a major player in AI innovation, and Rossi believes the country has a significant role to play in shaping global AI ethics and governance.

“Given that the AI market in India is growing at a rapid pace, with some estimates suggesting it is growing at a CAGR of 25-35% and expected to reach 17 billion USD by 2027, AI ethics and governance will be key as the market continues to develop,” she said.

She highlighted recent initiatives like the Global IndiaAI Summit 2024 conducted by the Ministry of Electronics and Information Technology (MeitY), which focused on advancing AI development in areas like compute capacity, foundational models, datasets, application development, future skills, startup financing, and safe AI.

With India’s growing talent pool in AI and data science, education and training in AI ethics are paramount. Rossi mentioned that IBM researchers in India are focused on AI ethical challenges across IBM’s three labs in the country: IBM Research India, IBM India Software Labs, and IBM Systems Development Labs.

“These labs are closely aligned to our strategy, and their pioneering work in AI, Cloud, Cybersecurity, Sustainability, Automation is integrated into IBM products, solutions, and services,” she said.

Future of AI ethics

Looking ahead, Rossi is optimistic but cautious about the evolution of AI ethics over the next decade. “Investing in AI ethics is crucial for long-term profitability, as ethical AI practices enhance brand reputation, build trust, and ensure compliance with evolving regulations,” she asserted.

IBM is actively building a robust ecosystem to advance ethical, open innovation around AI. “We recently collaborated with Meta and more than 120 other open-source leaders to launch the AI Alliance, a group whose mission is to build and support open technology for AI and the open communities that will enable it to benefit all of us,” Rossi shared.

As AI becomes increasingly interconnected and embedded in our lives, new ethical challenges will arise. Rossi highlighted the importance of focusing on trust in the era of powerful foundation models.

“In keeping with our focus on trustworthy AI, IBM is developing solutions for the next challenges such as robustness, uncertainty quantification, explainability, data drift, privacy, and concept drift in AI models,” she said.

The TLDR of my interview with IBM’s Francesca Rossi underscores a fundamental truth: ethical considerations in AI are not optional – they’re essential for sustainable success. As Rossi aptly put it, “These considerations are not in opposition to profit but are rather essential for sustainable success.”

With AI’s influence only set to grow, Francesca Rossi’s insights offer a roadmap for navigating the complex ethical landscape of AI. It’s an effort that demands transparency, collaboration, and an unwavering commitment to building systems that not only advance technology but also uphold the values that define us as a society – a collective effort involving policymakers, educators, and industry leaders here in India and around the world.

Also read: IBM reveals faster Heron R2 quantum computing chip: Why this matters

]]>
AI in Windows: Microsoft’s Anand Jethalia on securing future of PC https://www.digit.in/features/general/ai-in-windows-microsofts-anand-jethalia-on-securing-future-of-pc.html Fri, 22 Nov 2024 05:42:25 +0000 https://www.digit.in/?p=667488 Generative AI is reshaping industries at an unprecedented pace – we can all attest to this phenomenon. That’s why cybersecurity matters more than ever before, especially on our personal computing devices. As Windows continues to incorporate AI-driven features, from intelligent security protocols to enhanced user functionalities, the operating system is redefining what personal computing means for all of us. To understand how AI is transforming Windows and its implications for cybersecurity, I interviewed Anand Jethalia, Country Head of Cybersecurity at Microsoft India & South Asia.

Also read: Cybersecurity in Age of AI: Black Hat 2024’s top 3 LLM security risks

Our conversation delved into the escalating role of AI in both fortifying and challenging Windows security. With cyber threats growing in sophistication, AI emerges as a double-edged sword – empowering defenders and adversaries alike. Anand shares insights on how Microsoft leverages AI to protect Windows users globally, the innovations on the horizon, and how individuals can navigate this new era where AI and Windows converge to shape the future of personal and enterprise security. Edited excerpts follow:

Q) With advancements in AI, how do you see the cyber threat landscape?

AI has fundamentally reshaped the cybersecurity landscape, acting as both a powerful defence mechanism and a tool for increasingly sophisticated threats. On the defence side, AI has revolutionised how organisations detect and respond to cyber risks by enabling real-time analysis of vast data sets and uncovering patterns and anomalies indicative of potential breaches. We’ve seen the evolution of AI from the early rules-based systems to machine learning, and now to the advent of generative AI. 

However, AI’s potential is not exclusive to defenders. Cybercriminals, including nation-state actors and sophisticated criminal enterprises, are increasingly exploiting AI to automate and scale their attacks, making them more efficient and harder to detect. They are using AI to mimic legitimate behaviours, automate cyberattacks, and identify new vulnerabilities, amplifying the threat landscape. The inadvertent leakage of sensitive data through AI prompts also poses a growing concern.

As we look to the future, the role of AI in security will expand even further. AI will drive improvements in threat detection, reduce false positives, and automate routine tasks, all while fortifying organisations’ overall security posture. Yet, as cyber adversaries continue to evolve, our collective investment in AI and its integration into security strategies will be critical to staying ahead of these sophisticated threats. Security professionals will remain indispensable, focusing on advanced incident response and proactive threat hunting, with AI as a powerful ally.

Q) How is AI changing the way we protect ourselves on phones and laptops?

AI is truly transforming how we protect personal devices, like phones and laptops, from cyberthreats. At Microsoft, we’re leveraging AI to enhance both detection and prevention, using machine learning algorithms that analyse data in real time to block threats before they cause harm. 

For instance, supervised learning allows us to recognize known threats, such as malware, by detecting their unique signatures. Unsupervised learning takes it a step further, identifying emerging threats by spotting abnormal patterns that don’t have known signatures. We also use AI-powered user behaviour analytics to monitor for suspicious activity that could indicate compromised accounts.
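The distinction between the two detection styles described above can be illustrated with a deliberately simplified sketch. This is not Microsoft’s implementation – just a stdlib-only Python illustration of signature-based matching (the supervised, known-threat case) versus baseline-deviation flagging (the unsupervised, anomaly case), with all names and data here being hypothetical:

```python
import hashlib
import statistics

# Known-threat detection (analogous to supervised, signature-based scanning):
# flag any payload whose hash appears in a set of known-malware signatures.
KNOWN_BAD_HASHES = {hashlib.sha256(b"evil-payload").hexdigest()}

def matches_known_signature(payload: bytes) -> bool:
    """Return True if the payload's hash matches a known threat signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# Anomaly detection (analogous to unsupervised behaviour analytics):
# flag a new observation that deviates sharply from the historical baseline,
# even though no known signature exists for it.
def is_anomalous(history: list, value: float, threshold: float = 3.0) -> bool:
    """Return True if value sits more than `threshold` std-devs from the mean."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical baseline: failed logins per hour on a user account.
logins_per_hour = [4, 5, 6, 5, 4, 6, 5]
print(matches_known_signature(b"evil-payload"))  # signature-based hit
print(is_anomalous(logins_per_hour, 250))        # sudden spike, flagged
```

Real endpoint-protection systems obviously use far richer models, but the division of labour is the same: signatures catch what has been seen before, while baseline deviation catches what hasn’t.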

When it comes to personal devices, AI plays a key role in boosting endpoint security – whether it’s identifying vulnerabilities, detecting malware, or preventing unauthorised data transfers. Additionally, AI-driven next-generation firewalls and intrusion detection systems are helping us tap into threat intelligence to stay ahead of novel cyberattacks.

Also read: India’s cybersecurity crisis: Expensive breaches keep rising

At the core of our approach is Microsoft’s Zero Trust security model, which continuously validates device trustworthiness, ensuring that every device accessing company resources is secure. We also emphasise advanced authentication, like passwordless access through Azure Active Directory, which enhances both security and user experience. Our integrated security solutions – such as Microsoft Sentinel and Microsoft 365 Defender – offer proactive protection, while Microsoft Purview ensures data governance and insider risk mitigation. We focus on security from the ground up, embedding privacy and control into every aspect of our technology.

Q) What steps does Microsoft take to ensure that AI-driven security tools protect users’ privacy, especially here in India? 

At Microsoft, we prioritise protecting user privacy, including that of our customers in India, through a multi-layered approach built on AI-driven security tools. Our AI-powered tools, such as Microsoft Defender and Microsoft Purview, are developed with privacy by design principles, meaning privacy considerations are embedded at every stage of development. We use advanced encryption to safeguard data both at rest and in transit, ensuring a high level of protection. Moreover, we provide transparency reports and user consent mechanisms, empowering users to understand how their data is used and enabling them to control privacy settings.

For instance, Surface devices come equipped with robust AI-driven security features, including Windows Security for real-time malware protection and Windows Hello for passwordless authentication via facial recognition or biometrics, reducing the risk of credential theft. BitLocker encryption adds an extra layer of protection for sensitive data, ensuring it remains secure. Our Windows operating system security, with features like Secure Boot and Windows Defender System Guard, further strengthens protection against firmware attacks, safeguarding users’ devices from a wide array of cyberthreats. This comprehensive approach ensures that our AI-driven tools not only enhance cybersecurity but also uphold user privacy at every level.

Q) What innovations in cybersecurity should we look forward to?

Cybersecurity is a constantly evolving field, always changing to better detect and respond to attacks. One of the gamechangers in security will be the integration of AI and machine learning into cybersecurity systems, enabling more proactive threat detection and response. AI’s transformative power is rapidly shaping a new generation of cybersecurity tools and tactics, creating new opportunities at an accelerating pace. These technologies will help identify and neutralise threats in real time, often before the user is even aware, by analysing patterns and anomalies in data.

Q) How can businesses ensure that they stay resilient in the ever-evolving threat landscape? 

To stay resilient in today’s dynamic threat landscape, digital enterprises must adopt a proactive, multi-layered approach to cybersecurity.

With AI in security, businesses can instantly detect anomalies, respond swiftly to mitigate risks, and customise defences to their unique needs. Last year we launched the Secure Future Initiative to help protect our customers, the industry, and ourselves against emerging threats. This initiative boils down to three key principles: building technologies that are secure by design, by default, and in operation. It’s the largest cybersecurity engineering effort in history – a multiyear commitment with the equivalent of 34,000 full-time engineers dedicated to it. 

Also read: AI impact on cybersecurity future: The good, bad and ugly

However, cybersecurity isn’t just a technical matter; it’s a human one. Organisations must invest in ongoing employee training to recognize phishing, social engineering, and other tactics, reducing the risk of human error that can lead to breaches.

Q) What role do you see humans playing in cybersecurity in the future? How can today’s youth prepare for careers in a field where AI is taking a bigger role?

As AI becomes increasingly integral to cybersecurity, the role of humans remains crucial, particularly in areas requiring strategic thinking, ethical judgement, and creativity – qualities that AI cannot fully replicate. While AI excels at handling repetitive tasks such as threat detection, data analysis, and pattern recognition on a large scale, human expertise is indispensable for interpreting these results, making nuanced decisions, and addressing sophisticated attacks that require contextual understanding and insight.

Humans will play a key role in cyber threat intelligence and strategy. Although AI can identify potential threats, humans are needed to grasp the broader implications, develop long-term defence strategies, and adapt security policies to evolving global risks. Human analysts are also essential for tackling zero-day vulnerabilities and targeted attacks that demand innovative problem-solving.

For young individuals aiming to enter the field, proficiency in AI and a solid understanding of cybersecurity will be essential. Aspiring professionals should concentrate on building a robust foundation in AI and machine learning, alongside traditional cybersecurity concepts such as network security, cryptography, and risk management.

In a future where AI handles numerous operational tasks, humans will continue to be the strategic leaders, ethical guides, and creative problem solvers in cybersecurity. Young individuals preparing for this field should embrace both technological expertise and broader skills, enabling them to make unique contributions in a dynamic, AI-enhanced environment.

Also read: CrowdStrike BSOD error: Risking future of AI in cybersecurity?

]]>
Windows 365 Link: Microsoft’s compact cloud PC to rival Apple Mac mini https://www.digit.in/features/general/windows-365-link-microsofts-compact-cloud-pc-to-rival-apple-mac-mini.html Wed, 20 Nov 2024 08:44:07 +0000 https://www.digit.in/?p=666114 Taking a serious swing at miniaturising the good old desktop PC, Microsoft has announced the Windows 365 Link – a compact, fanless Windows PC aimed at connecting users directly to their Windows 365 Cloud PC in seconds. While it’s currently aimed at business users in medium or large organisations – a more cloud-native version of a thin client – I can’t help but wonder at the potential for this device to become Microsoft’s response to Apple’s Mac mini in the consumer market. 

I mean, why not? The Mac mini has long been a favourite among those looking for a small yet powerful desktop computer that doesn’t break the bank. So what’s stopping Microsoft from extending the Windows 365 Link to everyday consumers like you and me? 

The Windows 365 Link is an interesting piece of hardware. Yes, it has its pros and cons from an end-user perspective (which I’ve highlighted further in the article), but I believe it deserves a spot in retail stores alongside its intended business-focused rollout.

At first glance, the Windows 365 Link is a small, unassuming device. Measuring just 4.72-in x 4.72-in x 1.18-in, it’s a compact box that can easily sit on a desk or disappear completely from view by being mounted behind a monitor. It’s designed to boot up in seconds and provide instant access to a Windows 11 desktop streamed from the cloud – that’s right, it doesn’t have Windows 11 natively present on its internal storage. The device comes equipped with an array of ports, including three USB-A 3.2 ports, one USB-C 3.2 port, HDMI and DisplayPort outputs, an Ethernet jack, Wi-Fi 6E, and Bluetooth 5.3 connectivity.

As per Microsoft’s announcement, the Windows 365 Link is intended exclusively for businesses and organisations – at least for now. Microsoft plans to launch it in a limited release in December 2024, and make it more generally available in April 2025, with pricing starting at $349 in select markets. So, where does that leave the average consumer?

Let’s face it, at a $349 starting price, the Windows 365 Link doesn’t cost a fortune. It offers an accessible price point for consumers looking to experience Windows 11 without investing in a full-fledged PC. For students, freelancers, or anyone on a budget, this device could be an economical way to access a desktop-class computing environment beyond their smartphone.

The Windows 365 Link’s small size makes it ideal for minimalist users – both in terms of desk space and overall computing experience. It’s comparable to the Mac mini in size and could easily blend into a home office or living room environment. Anyone who’s online in 2024 is already a heavy user of cloud-based applications, whether it’s OTT streaming services or Google Docs – the Windows 365 Link just takes this a step further by streaming the entire Windows 11 desktop through the internet. This means all your PC settings, applications, and files are accessible from anywhere, irrespective of any hardware limitations.

For less tech-savvy consumers – think of the elders in your family, for instance – the Windows 365 Link reduces potential headaches. In theory, your offline device isn’t a single point of failure for all of your data anymore. And since everything’s in the cloud, Windows operating system updates and security patches are handled automatically as well.

Of course, the Windows 365 Link isn’t without its drawbacks – the biggest one being its reliance on a consistent, high-speed internet connection. Since the device streams the Windows 11 operating system from the cloud, it’s essentially a brick if your internet is down or too slow (below a recommended speed threshold). Naturally, for consumers in areas with erratic and unreliable internet, this computing device is simply a non-starter.

Unlike traditional desktop PCs, or the Mac mini for that matter, the Windows 365 Link doesn’t store data or applications locally. This means if your internet goes down, so does your ability to use the device – a dealbreaker for users who need access to their computers at all times.

Total cost of ownership is another concern: accessing Windows 365 currently requires a subscription, which adds to the overall cost of owning the Windows 365 Link for the average consumer. The recurring fee, combined with the limited ability to customise and tweak the system, also makes the device a poor fit for Windows power users.

Despite the cons, there’s a compelling case for Microsoft to bring the Windows 365 Link to the consumer market. By offering an affordable, easy-to-use device, Microsoft could empower more people to participate in the digital world. Students, senior citizens, and developing regions on the wrong side of the digital divide could benefit immensely from such computing platforms.

Apple’s Mac mini has demonstrated over the years that there’s consumer appetite for a sleek, compact desktop experience. Windows 365 Link gives Microsoft the opportunity to tap into this market segment, offering a Windows-based alternative. 

As a long-time observer and user of Microsoft’s products through the 1990s, I definitely see the Windows 365 Link as more than just a business tool. It’s a window into the future of computing, where local and cloud-based experiences converge. For Microsoft, the Windows 365 Link presents an opportunity to further refine what personal computing means for end users.

Of course, challenges exist. Internet infrastructure varies widely, and not all users are ready to embrace a fully cloud-dependent device. However, with thoughtful implementation and perhaps hybrid solutions that offer some offline capabilities, Microsoft could mitigate these concerns.

At least in my humble view, the Windows 365 Link has all the makings of a device that could leave its mark on the consumer PC market. By addressing the cons and leveraging the pros, Microsoft has the chance to offer a compelling alternative to the Mac mini and other compact PCs in the market. As we move toward a more digitally connected and cloud-centric world, devices like the Windows 365 Link could become the norm rather than the exception.

Also read: Mac Mini M4: Apple’s unexpected gaming console?

]]>