In our new paper we ran an experiment at Procter and Gamble with 776 experienced professionals solving real business problems. We found that individuals randomly assigned to use AI did as well as a team of two without AI. And AI-augmented teams produced more exceptional solutions. The teams using AI were happier as well. Even more interesting: AI broke down professional silos. R&D people with AI produced more commercial work, and commercial people with AI had more technical solutions. The standard model of "AI as productivity tool" may be too limiting. Today's AI can function as a kind of teammate, offering better performance, expertise sharing, and even positive emotional experiences. This was a massive team effort with work led by Fabrizio Dell'Acqua, Charles Ayoubi, and Karim Lakhani, along with Hila Lifshitz, Raffaella Sadun, Lilach M., me, and our partners at P&G: Yi Han, Jeff Goldman, Hari Nair and Stewart Taub. Substack about the work here: https://lnkd.in/ehJr8CxM Paper: https://lnkd.in/e-ZGZmW9
-
Over the last year, I've seen many people fall into the same trap: They launch an AI-powered agent (chatbot, assistant, support tool, etc.)... But only track surface-level KPIs, like response time or number of users. That's not enough. To create AI systems that actually deliver value, we need holistic, human-centric metrics that reflect:
• User trust
• Task success
• Business impact
• Experience quality
This infographic highlights 15 essential dimensions to consider:
⤷ Response Accuracy: Are your AI answers actually useful and correct?
⤷ Task Completion Rate: Can the agent complete full workflows, not just answer trivia?
⤷ Latency: Response speed still matters, especially in production.
⤷ User Engagement: How often are users returning or interacting meaningfully?
⤷ Success Rate: Did the user achieve their goal? This is your north star.
⤷ Error Rate: Irrelevant or wrong responses? That's friction.
⤷ Session Duration: Longer isn't always better; it depends on the goal.
⤷ User Retention: Are users coming back after the first experience?
⤷ Cost per Interaction: Especially critical at scale. Budget-wise agents win.
⤷ Conversation Depth: Can the agent handle follow-ups and multi-turn dialogue?
⤷ User Satisfaction Score: Feedback from actual users is gold.
⤷ Contextual Understanding: Can your AI remember and refer to earlier inputs?
⤷ Scalability: Can it handle volume without degrading performance?
⤷ Knowledge Retrieval Efficiency: This is key for RAG-based agents.
⤷ Adaptability Score: Is your AI learning and improving over time?
If you're building or managing AI agents, bookmark this. Whether it's a support bot, GenAI assistant, or a multi-agent system, these are the metrics that will shape real-world success.
Did I miss any critical ones you use in your projects? Let's make this list even stronger: drop your thoughts 👇
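Several of these dimensions fall straight out of interaction logs. A minimal sketch of how they might be aggregated; the field names and log schema here are illustrative, not from the infographic:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    latency_ms: float      # Latency
    goal_achieved: bool    # Success Rate
    was_error: bool        # Error Rate
    cost_usd: float        # Cost per Interaction

def summarize(interactions):
    """Aggregate a few of the dimensions above from raw interaction logs."""
    n = len(interactions)
    return {
        "success_rate": sum(i.goal_achieved for i in interactions) / n,
        "error_rate": sum(i.was_error for i in interactions) / n,
        "avg_latency_ms": sum(i.latency_ms for i in interactions) / n,
        "cost_per_interaction": sum(i.cost_usd for i in interactions) / n,
    }

# Illustrative log entries
logs = [
    Interaction(420.0, True, False, 0.002),
    Interaction(910.0, False, True, 0.004),
    Interaction(350.0, True, False, 0.001),
]
print(summarize(logs))
```

Softer dimensions like user trust or satisfaction need surveys or feedback widgets rather than log counters, which is exactly why a purely technical dashboard is not enough.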
-
I built an AI Data Visualization Agent that writes its own code... 🤯 And it's completely open-source. Here's what it can do:
1. Natural Language Analysis
⤷ Upload any dataset
⤷ Ask questions in plain English
⤷ Get instant visualizations
⤷ Follow up with more questions
2. Smart Viz Selection
⤷ Automatically picks the right chart type
⤷ Handles complex statistical plots
⤷ Customizes formatting for clarity
The AI agent:
→ Understands your question
→ Writes the visualization code
→ Creates the perfect chart
→ Explains what it found
Choose the model that fits your needs:
→ Meta-Llama 3.1 405B for heavy lifting
→ DeepSeek V3 for deep insights
→ Qwen 2.5 7B for speed
→ Meta-Llama 3.3 70B for complex queries
No more struggling with visualization libraries. No more debugging data processing code. No more switching between tools. The best part? I've included a step-by-step tutorial with 100% open-source code. Want to try it yourself? Link to the tutorial and GitHub repo in the comments. P.S. I create these tutorials and open-source them for free. Your 👍 like and ♻️ repost helps keep me going. Don't forget to follow me Shubham Saboo for daily tips and tutorials on LLMs, RAG and AI Agents.
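To give a feel for the "smart viz selection" step, here is a toy heuristic of my own; it is not the repo's code (the actual agent has an LLM write the plotting code), just an illustration of how chart type can follow from the data's shape:

```python
def pick_chart(x_values, y_values):
    """Toy chart-type selection:
    categorical x -> bar, ordered numeric x -> line, otherwise scatter."""
    if all(isinstance(v, str) for v in x_values):
        return "bar"       # labels on the x-axis read best as bars
    if list(x_values) == sorted(x_values):
        return "line"      # ordered numeric x suggests a trend
    return "scatter"       # unordered numeric pairs -> scatter plot

# Illustrative calls
print(pick_chart(["Jan", "Feb", "Mar"], [10, 14, 9]))  # bar
print(pick_chart([1, 2, 3], [5.0, 5.5, 6.1]))          # line
print(pick_chart([3, 1, 2], [5.0, 5.5, 6.1]))          # scatter
```

An LLM-driven agent effectively learns richer versions of rules like these, plus formatting and statistical-plot choices that a hand-written heuristic would miss.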
-
Last week, I talked about the possibilities of AI to make work easier. This week, I want to share a clear example of how we are doing that at HubSpot. We're focused on helping our customers grow. So naturally, we take customer support seriously. Whether it's a product question or a business challenge, we want inquiries to be answered efficiently and thoughtfully. We knew AI could help, but we didn't know quite what it would look like! We first deployed AI in website and support chat. To mitigate any growing pains, we had a customer rep standing by for questions that came through who could quickly take the baton if things went sideways. And, sometimes they did. But we didn't panic. We listened, we improved, and we kept testing. The more data AI collects, the better it gets. Today, 83% of the chat on HubSpot's website is AI-managed, and our chatbot is digitally resolving about 30% of incoming tickets. That's an enormous gain in productivity! Our customer reps have more time to focus on complex, high-touch questions. AI also helps us quickly identify trends (questions or issues that are being raised more frequently) so we can intervene early. In other words, AI has not just transformed our customer support. It has elevated it. So, here is what we learned:
- Don't panic if customer experience gets worse initially! It will improve as your data evolves.
- Evolve your KPIs and how you measure success: if AI resolves typical questions and your team resolves tricky ones, they will need more time.
- Use AI to elevate your team's efforts.
How are you using AI in support? What are you learning?
-
Facial recognition software used to misidentify dark-skinned women 47% of the time. Until Joy Buolamwini forced Big Tech to fix it. In 2015, Dr. Joy Buolamwini was building an art project at the MIT Media Lab. It was supposed to use facial recognition to project the face of an inspiring figure onto the user's reflection. But the software couldn't detect her face. Joy is a dark-skinned woman. And to be seen by the system, she had to put on a white mask. She wondered: Why? She launched Gender Shades, a research project that audited commercial facial recognition systems from IBM, Microsoft, and Face++. The systems could identify lighter-skinned men with 99.2% accuracy. But for darker-skinned women, the error rate jumped as high as 47%. The problem? AI was being trained on biased datasets: over 75% male, 80% lighter-skinned. So Joy introduced the Pilot Parliaments Benchmark, a new training dataset with diverse representation by gender and skin tone. It became a model for how to test facial recognition fairly. Her research prompted Microsoft and IBM to revise their algorithms. Amazon tried to discredit her work. But she kept going. In 2016, she founded the Algorithmic Justice League, a nonprofit dedicated to challenging bias in AI through research, advocacy, and art. She called it the Coded Gaze, the embedded bias of the people behind the code. Her spoken-word film "AI, Ain't I A Woman?", which shows facial recognition software misidentifying icons like Michelle Obama, has been screened around the world. And her work was featured in the award-winning documentary Coded Bias, now on Netflix. In 2019, she testified before Congress about the dangers of facial recognition. She warned that even if accuracy improves, the tech can still be abused. For surveillance, racial profiling, and discrimination in hiring, housing, and criminal justice. To counter it, she co-founded the Safe Face Pledge, which demands ethical boundaries for facial recognition. No weaponization.
No use by law enforcement without oversight. After years of activism, major players (IBM, Microsoft, Amazon) paused facial recognition sales to law enforcement. In 2023, she published her best-selling book "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." She advocated for inclusive datasets, independent audits, and laws that protect marginalized communities. She consulted with the White House ahead of Executive Order 14110 on "Safe, Secure, and Trustworthy AI." But she didn't stop at facial recognition. She launched Voicing Erasure, a project exposing bias in voice AI systems like Siri and Alexa. Especially their failure to recognize African-American Vernacular English. Her message is clear: AI doesn't just reflect society. It amplifies its flaws. Fortune calls her "the conscience of the AI revolution." 💡 In 2025, I'm sharing 365 stories of women entrepreneurs in 365 days. Follow Justine Juillard for daily #femalefounder spotlights.
-
I spent 3+ hours in the last 2 weeks putting together this no-nonsense curriculum so you can break into AI as a software engineer in 2025. This post (plus flowchart) gives you the latest AI trends, core skills, and tool stack you'll need. I want to see how you use this to level up. Save it, share it, and take action.
1. LLMs (Large Language Models)
This is the core of almost every AI product right now; think ChatGPT, Claude, Gemini. To be valuable here, you need to:
- Design great prompts (zero-shot, CoT, role-based)
- Fine-tune models (LoRA, QLoRA, PEFT; this is how you adapt LLMs for your use case)
- Understand embeddings for smarter search and context
- Master function calling (hooking models up to tools/APIs in your stack)
- Handle hallucinations (trust me, this is a must in prod)
Tools: OpenAI GPT-4o, Claude, Gemini, Hugging Face Transformers, Cohere
2. RAG (Retrieval-Augmented Generation)
This is the backbone of every AI assistant/chatbot that needs to answer questions with real data (not just model memory). Key skills:
- Chunking & indexing docs for vector DBs
- Building smart search/retrieval pipelines
- Injecting context on the fly (dynamic context)
- Multi-source data retrieval (APIs, files, web scraping)
- Prompt engineering for grounded, truthful responses
Tools: FAISS, Pinecone, LangChain, Weaviate, ChromaDB, Haystack
3. Agentic AI & AI Agents
Forget single bots. The future is teams of agents coordinating to get stuff done; think automated research, scheduling, or workflows. What to learn:
- Agent design (planner/executor/researcher roles)
- Long-term memory (episodic, context tracking)
- Multi-agent communication & messaging
- Feedback loops (self-improvement, error handling)
- Tool orchestration (using APIs, CRMs, plugins)
Tools: CrewAI, LangGraph, AgentOps, FlowiseAI, Superagent, ReAct Framework
4. AI Engineer
You need to be able to ship, not just prototype. Get good at:
- Designing & orchestrating AI workflows (combine LLMs + tools + memory)
- Deploying models and managing versions
- Securing API access & gateway management
- CI/CD for AI (test, deploy, monitor)
- Cost and latency optimization in prod
- Responsible AI (privacy, explainability, fairness)
Tools: Docker, FastAPI, Hugging Face Hub, Vercel, LangSmith, OpenAI API, Cloudflare Workers, GitHub Copilot
5. ML Engineer
Old-school but essential. AI teams always need:
- Data cleaning & feature engineering
- Classical ML (XGBoost, SVM, Trees)
- Deep learning (TensorFlow, PyTorch)
- Model evaluation & cross-validation
- Hyperparameter optimization
- MLOps (tracking, deployment, experiment logging)
- Scaling on cloud
Tools: scikit-learn, TensorFlow, PyTorch, MLflow, Vertex AI, Apache Airflow, DVC, Kubeflow
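To make the RAG skills in section 2 concrete, here is a toy retrieval pipeline. It stands in a bag-of-words similarity for a real embedding model, and every name in it is my own illustration, not the API of any tool listed above:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word counts. A real pipeline would call a model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank document chunks by similarity to the query: the 'R' in RAG."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    """Inject the retrieved context into the prompt for a grounded answer."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Illustrative document chunks
docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Refunds are issued to the original payment method.",
]
print(build_prompt("refund policy returns", docs))
```

In production you would swap `embed` for a model-backed embedding, store the vectors in one of the vector DBs above (FAISS, Pinecone, Weaviate, ...), and send `build_prompt`'s output to an LLM; the chunk-embed-retrieve-inject shape stays the same.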
-
In January, everyone signs up for the gym, but you're not going to run a marathon in two or three months. The same applies to AI adoption. I've been watching enterprises rush into AI transformations, desperate not to be left behind. Board members demanding AI initiatives, executives asking for strategies, everyone scrambling to deploy the shiniest new capabilities. But here's the uncomfortable truth I've learned from 13+ years deploying AI at scale: Without organizational maturity, AI strategy isn't strategy; it's sophisticated guesswork. Before I recommend a single AI initiative, I assess five critical dimensions:
1. Infrastructure: Can your systems handle AI workloads? Or are you struggling with basic data connectivity?
2. Data ecosystem: Is your data accessible? Or scattered across 76 different source systems?
3. Talent availability: Do you have the right people with capacity to focus? Or are your best people already spread across 14 other strategic priorities?
4. Risk tolerance: Is your culture ready to experiment? Or is it still "measure three times, cut once"?
5. Funding alignment: Are you willing to invest not just in tools, but in the foundational capabilities needed for success?
This maturity assessment directly informs which of five AI strategies you can realistically execute:
- Efficiency-based
- Effectiveness-based
- Productivity-based
- Growth-based
- Expert-based
Here's my approach that's worked across 39+ production deployments: Think big, start small, scale fast. Or more simply: Crawl. Walk. Run. The companies stuck in POC purgatory? They sprinted before they could stand. So remember: AI is a muscle that has to be developed. You don't go from couch to marathon in a month, and you don't go from legacy systems to enterprise-wide AI transformation overnight. What's your organization's AI fitness level? Are you crawling, walking, or ready to run?
-
Did you see the recent news? Microsoft recently unveiled its latest AI Diagnostic Orchestrator (MAI-DxO), reporting an impressive 85.5% accuracy on 304 particularly complex cases from the New England Journal of Medicine, compared to just ~20% for physicians under controlled conditions. These results, quadrupling the diagnostic accuracy of human clinicians while being more cost-effective than standard pathways, have gotten a lot of buzz. They may mark a significant milestone in clinical decision support, and they raise both enthusiasm and caution. Some perspective as we continue to determine the role of AI in healthcare.
1. Validation Is Essential. Promising results in controlled settings are just the beginning. We urge Microsoft and others to pursue transparent, peer-reviewed clinical studies, including real-world trials comparing AI-assisted workflows against standard clinician performance, ideally published in clinical journals.
2. Recognize the Value of Patient-Physician Relations. Even the most advanced AI cannot replicate the human touch: listening, interpreting, and guiding patients through uncertainty. Physicians must retain control, using AI as a tool, not a crutch.
3. Acknowledge Potential Bias. AI is only as strong as its training data. We must ensure representation across demographics and guard against replicating systemic biases. Transparency in model design and evaluation standards is non-negotiable.
4. Regulatory & Liability Frameworks. As AI enters clinical care, we need clear pathways from FDA approval to liability guidelines. The AMA is actively engaging with regulators, insurers, and health systems to craft policies that ensure safety, data integrity, and professional accountability.
5. Prioritize Clinician Wellness. Tools that reduce diagnostic uncertainty and documentation burden can strengthen clinician well-being. But meaningful adoption requires integration with workflow, training, and ongoing support.
We need to look at this from a holistic perspective. We need to promote an environment where physicians, patients, and AI systems collaborate. Let's convene cross-sector partnerships across industry, academia, and government to champion AI that empowers clinicians, enhances patient care, and protects public health. Let's embrace innovation, not as a replacement for human care, but as its greatest ally. #healthcare #ai #innovation #physicians https://lnkd.in/ew-j7yNS
-
This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era," addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks. It outlines the current data protection landscape, including the Fair Information Practice Principles, GDPR, and U.S. state privacy laws, and discusses the distinction and regulatory implications between predictive and generative AI. The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both individual and societal levels. It notes that existing laws are inadequate for the emerging challenges posed by AI systems, because they don't fully tackle the shortcomings of the Fair Information Practice Principles (FIPs) framework or concentrate adequately on the comprehensive data governance measures necessary for regulating data used in AI development. According to the paper, FIPs are outdated and not well-suited for modern data and AI complexities, because:
- They do not address the power imbalance between data collectors and individuals.
- They fail to enforce data minimization and purpose limitation effectively.
- They place too much responsibility on individuals for privacy management.
- They allow data collection by default, putting the onus on individuals to opt out.
- They focus on procedural rather than substantive protections.
- They struggle with the concepts of consent and legitimate interest, complicating privacy management.
It emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. The paper suggests three key strategies to mitigate the privacy harms of AI: 1.) Denormalize Data Collection by Default: Shift from opt-out to opt-in data collection models to facilitate true data minimization.
This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms. 2.) Focus on the AI Data Supply Chain: Enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data. This includes a call for regulatory frameworks that address data privacy comprehensively across the data supply chain. 3.) Flip the Script on Personal Data Management: Encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences. This strategy aims to empower individuals by facilitating easier management and control of their personal data in the context of AI. By Dr. Jennifer King and Caroline Meinhardt. Link: https://lnkd.in/dniktn3V
-
MIT and Harvard Medical School researchers just unlocked interactive 3D medical image analysis with language! Medical imaging AI has long been limited to rigid, single-task models that require extensive fine-tuning for each clinical application. VoxelPrompt is the first vision-language agent that enables real-time, interactive analysis of 3D medical scans through natural language commands.
1. Unified multiple radiology tasks (segmentation, volume measurement, lesion characterization) within a single, multimodal AI model.
2. Executed complex imaging commands like "compute tumor growth across visits" or "segment infarcts in MCA territory" without additional training.
3. Matched or exceeded specialized models in anatomical segmentation and visual question answering for neuroimaging tasks.
4. Enabled real-time, interactive workflows, allowing clinicians to refine analysis through language inputs instead of manual annotations.
Notably, I like that the design includes native-space convolutions that preserve the original acquisition resolution. This addresses a common limitation in medical imaging, where resampling can degrade important details. Excited to see agents being introduced more directly into clinician workflows. Here's the awesome work: https://lnkd.in/ggQ4YGeX Congrats to Andrew Hoopes, Victor Ion Butoi, John Guttag, and Adrian V. Dalca! I post my takes on the latest developments in health AI: connect with me to stay updated! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW