AI Explainability and Its Immediate Impact on Legal Tech – Insights from Expert Discussion  

Last week, leading experts from academia, industry, and regulatory backgrounds gathered to discuss the legal and commercial implications of AI explainability, with a particular focus on its impact in retail. Hosted by Professor Shlomit Yaniski Ravid of Yale Law and Fordham Law, the panel brought together thought leaders to address the growing need for transparency in AI-driven decision-making, emphasising the importance of ensuring AI operates within ethical and legal parameters and the need to ‘open the black box’ of AI decision-making.

Regulatory challenges and the new AI standard ISO 42001

Tony Porter, former Surveillance Camera Commissioner for the UK Home Office, provided insights into regulatory challenges surrounding AI transparency. He highlighted the significance of ISO 42001, the international standard for AI management systems, which offers a framework for responsible AI governance. “Regulations are evolving rapidly, but standards like ISO 42001 provide organisations with a structured approach to balancing innovation with accountability,” Porter said. The panel discussion led by Prof. Yaniski Ravid featured representatives from leading AI companies, who shared how their organisations implement transparency in AI systems, particularly in retail and legal applications.

Chamelio: Transforming legal decision-making with explainable AI

Alex Zilberman from Chamelio, a legal intelligence platform built exclusively for in-house legal teams, addressed the role of AI in corporate legal operations. Chamelio changes how in-house legal teams operate through an AI agent that learns and uses the legal knowledge stored in its repository of contracts, policies, compliance documents, corporate records, regulatory filings, and other business-critical legal documents.

Chamelio’s AI agent performs core legal tasks: extracting key obligations, streamlining contract reviews, monitoring compliance, and delivering actionable insights that would otherwise remain buried in thousands of pages of documents. The platform integrates with existing tools and adapts to a team’s legal knowledge.

“Trust is the number one requirement to build a system that professionals can use,” Zilberman said. “This trust is achieved by providing as much transparency as possible. Our solution allows users to understand where each recommendation comes from, ensuring they can confirm and verify every insight.”

Chamelio avoids the ‘black box’ model by letting legal professionals trace the reasoning behind AI-generated recommendations. For example, when the system encounters areas of a contract that it doesn’t recognise, instead of guessing, it flags the uncertainty and requests human input. This approach keeps legal professionals in control of important decisions, particularly in edge cases such as novel clauses with no precedent or conflicting legal terms.
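This kind of confidence-gated escalation can be sketched in a few lines. The example below is a hypothetical illustration, not Chamelio's actual API: the class names, the `source` field, and the 0.85 threshold are all assumptions, but the pattern shown (record where each recommendation came from, and route low-confidence items to a human instead of auto-accepting them) matches the behaviour described above.

```python
# Hypothetical sketch of confidence-gated clause review (not Chamelio's real API).
from dataclasses import dataclass

@dataclass
class ClauseReview:
    clause: str
    label: str          # e.g. "indemnification", "termination", or "unknown"
    confidence: float   # model's self-reported confidence, 0..1
    source: str         # provenance, so users can verify each insight

CONFIDENCE_THRESHOLD = 0.85  # assumed operating point; below this, escalate

def triage(reviews):
    """Split AI clause reviews into auto-accepted and human-escalated lists."""
    accepted, escalated = [], []
    for r in reviews:
        (accepted if r.confidence >= CONFIDENCE_THRESHOLD else escalated).append(r)
    return accepted, escalated

reviews = [
    ClauseReview("Supplier shall indemnify Buyer...", "indemnification", 0.97, "MSA §9.2"),
    ClauseReview("Bespoke escrow arrangement...", "unknown", 0.41, "Rider B"),
]
auto, needs_human = triage(reviews)  # the novel clause lands in needs_human
```

The key design choice is that uncertainty is surfaced, not hidden: every recommendation carries its provenance, and anything below the threshold is flagged for review rather than guessed at.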

Buffers.ai: Changing inventory optimisation

Pini Usha from Buffers.ai shared insights on AI-driven inventory optimisation, an important application in retail. Buffers.ai serves medium to large retail and manufacturing brands, including H&M, P&G, and Toshiba, helping retailers – particularly in the fashion industry – tackle inventory optimisation challenges like forecasting, replenishment, and assortment planning. The company helps ensure the right product quantities are delivered to the correct locations, reducing instances of stockouts and excess inventory.

Buffers.ai offers a full-SaaS ERP plugin that integrates with systems like SAP and Priority, providing ROI in months. “Transparency is key. If businesses cannot understand how AI predicts demand fluctuations or supply chain risks, they will be hesitant to rely on it,” Usha said.

Buffers.ai integrates explainability tools that allow clients to visualise and adjust AI-driven forecasts, helping ensure alignment with real-time business operations and market trends. For example, when placing a new product with no historical data, the system analyses similar product trends, store characteristics, and local demand signals. If a branch has historically shown strong demand for comparable items, the system might recommend a higher quantity even without sales history for the new product. Similarly, when allocating inventory between branches and online stores, the system details factors like regional sales performance, customer traffic patterns, and online conversion rates to explain its recommendations.
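The cold-start logic described above can be illustrated with a minimal sketch. This is not Buffers.ai's actual model; the function name, the similarity weights, and the branch uplift factor are assumptions. It shows the explainability idea: derive a quantity from comparable products' demand, scale it by a branch-level factor, and return the contributing factors alongside the number so a planner can see why the recommendation is what it is.

```python
# Hypothetical cold-start forecast sketch (not Buffers.ai's actual model):
# estimate an initial quantity for a new product from comparable products'
# demand at the same branch, and report the factors behind the number.

def cold_start_forecast(similar_weekly_demand, similarity_weights, branch_uplift):
    """Weighted average of comparable products' demand, scaled by a branch factor.

    similar_weekly_demand: past weekly unit sales of comparable products
    similarity_weights:    how alike each comparable is to the new product (0..1)
    branch_uplift:         branch-level demand factor (1.0 = average branch)
    """
    total_w = sum(similarity_weights)
    base = sum(d * w for d, w in zip(similar_weekly_demand, similarity_weights)) / total_w
    forecast = base * branch_uplift
    # Return the explanation with the forecast, not just the number.
    return {
        "comparable_base": round(base, 1),
        "branch_uplift": branch_uplift,
        "forecast_units": round(forecast, 1),
    }

# A branch with strong demand for comparable items gets a higher recommendation.
result = cold_start_forecast([120, 90, 150], [0.9, 0.5, 0.7], branch_uplift=1.3)
```

Returning the intermediate factors rather than a bare number is what lets users "visualise and adjust" the forecast: a planner who disagrees with the branch uplift can see it and override it.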

Corsight AI: Facial recognition in retail and law enforcement

Matan Noga from Corsight AI discussed the role of explainability in facial recognition technology, which is increasingly used for security and customer experience enhancement in retail. Corsight AI specialises in real-world facial recognition and provides its solutions to law enforcement, airports, malls, and retailers.

The company’s technology is used for applications like watchlist alerting, locating missing persons, and forensic investigations. Corsight AI differentiates itself by focusing on high-speed, real-time recognition in ways compliant with evolving privacy laws and ethical AI guidelines. The company works with government and commercial clients to promote responsible AI adoption, emphasising the importance of explainability in building trust and ensuring ethical use.
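At its core, watchlist alerting compares a face embedding against enrolled identities and raises an alert only above a match threshold. The sketch below is a generic illustration of that pattern, not Corsight AI's pipeline; the cosine-similarity measure, the 0.8 threshold, and the logged score are assumptions. Recording the score with each alert is one simple way to make such a system auditable and its outputs explainable after the fact.

```python
# Generic watchlist-alerting sketch (not Corsight AI's actual pipeline):
# compare a probe face embedding against a watchlist and emit alerts that
# include the match score, so every alert can be reviewed and audited.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.8  # assumed operating point; real systems tune this per deployment

def check_watchlist(probe, watchlist):
    """Return an auditable alert for each watchlist entry above the threshold."""
    alerts = []
    for identity, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score >= MATCH_THRESHOLD:
            alerts.append({"identity": identity, "score": round(score, 3)})
    return alerts
```

Thresholding and score logging are also where privacy and ethics constraints bite in practice: the operating point trades false alerts against misses, and the audit trail is what lets a deployment demonstrate compliance.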

ImiSight: AI-powered image intelligence

Daphne Tapia from ImiSight highlighted the importance of explainability in AI-powered image intelligence, particularly in high-stakes applications like border security and environmental monitoring. ImiSight specialises in multi-sensor integration and analysis, utilising AI/ML algorithms to detect changes, anomalies, and objects in sectors like land encroachment, environmental monitoring, and infrastructure maintenance. “AI explainability means understanding why a specific object or change was detected. We prioritise traceability and transparency to ensure users can trust our system’s outputs,” Tapia said. ImiSight continuously refines its models based on real-world data and user feedback. The company collaborates with regulatory agencies to ensure its AI meets international compliance standards.

The panel underscored the important role of AI explainability in fostering trust, accountability, and ethical use of AI technologies, particularly in retail and other high-stakes industries. By prioritising transparency and human oversight, organisations can ensure AI systems are both effective and trustworthy, aligning with evolving regulatory standards and public expectations.

Watch the full session here


