Gagan Bansal Profile
Gagan Bansal

@bansalg_

1,791
Followers
458
Following
30
Media
455
Statuses

Researcher focused on improving Human-AI Interaction; Currently @MSFTResearch AI Frontiers; Prev. @uwcse & @uw_hai ; Check out our work on #AutoGen @pyautogen

Seattle, WA
Joined November 2012
Pinned Tweet
@bansalg_
Gagan Bansal
7 months
Working on or interested in agents based on Large Language Models like GPT-4? Follow @pyautogen to get the latest news and use cases of our rapidly expanding multi-agent framework #AutoGen !
Tweet media one
0
0
12
@bansalg_
Gagan Bansal
2 years
Looking for research internships on #AI + #HCI ?? Our team, Human-AI eXperiences (HAX) at @MSFTResearch , is hiring multiple interns for summer 2023 on topics related to human-centered AI (see 👇). cc: @SaleemaAmershi @vykthur @adamfourney Apply at
Tweet media one
10
72
247
@bansalg_
Gagan Bansal
4 years
New work on explainable AI! w/ @tongshuangwu , J. Zhu, R. Fok, @besanushi , @ecekamar , M. Ribeiro, and @dsweld . When AIs advise people, does an explanation of its reasoning actually help the person? Does it let the human outperform the AI? Does it ...(1/6)
6
43
217
@bansalg_
Gagan Bansal
6 months
📢📢We are also hiring summer interns!📢📢 Come work with our teams in MSR AI Frontiers to conduct research on interaction between AI agents and people!
8
39
214
@bansalg_
Gagan Bansal
5 months
Our team will start reviewing internship applications on Jan 5th 2024! So please apply and submit material before that!
@bansalg_
Gagan Bansal
6 months
📢📢We are also hiring summer interns!📢📢 Come work with our teams in MSR AI Frontiers to conduct research on interaction between AI agents and people!
8
39
214
2
25
154
@bansalg_
Gagan Bansal
2 years
🚨Announcing #CHI2022 workshop on Trust and Reliance in AI-Human Teams (TRAIT) ft. keynote by @jdlee888 , panel w/ #HCI & #AI experts, group activities & more! w/ @alison_m_smith , @ZanaBucinca , @tongshuangwu , @d19fe8 , @JessicaHullman , @DrSimoneStumpf See:
Tweet media one
3
30
123
@bansalg_
Gagan Bansal
2 years
About 60 researchers brainstorming how to define, measure, and shape trust and reliance in human-AI interaction at the #TRAIT2022 hybrid workshop at #CHI2022 @sigchi
Tweet media one
Tweet media two
3
21
106
@bansalg_
Gagan Bansal
1 year
🚨 Working on #AI + #HCI ? Join the 2nd #CHI2023 workshop on Trust & Reliance in AI-Human Teams (TRAIT), focused on real-world human-AI use cases. Keynote: @Carryveggies & Michael Terry. w/ @alison_m_smith @ZanaBucinca @tongshuangwu @d19fe8 @JessicaHullman @DrSimoneStumpf
Tweet media one
1
29
96
@bansalg_
Gagan Bansal
4 years
Excited to share a draft of our new work on human-centered AI! w/ @besanushi @ecekamar @erichorvitz @dsweld When an AI assists human decision-makers, e.g., by recommending its predictions, is the most accurate AI necessarily the best teammate? (1/5)
Tweet media one
2
18
84
@bansalg_
Gagan Bansal
2 years
And it's a wrap! We had 80 people across 8 timezones join in a hybrid setting. Thanks to my co-organizers, speakers, PC, & participants for making this workshop possible @alison_m_smith @ZanaBucinca @tongshuangwu @d19fe8 @JessicaHullman @DrSimoneStumpf #TRAIT2022 #CHI2022
Tweet media one
0
11
77
@bansalg_
Gagan Bansal
2 years
Excited to be at #CHI2022 in-person!! DM me if you'd like to catch up there! We can talk about human-AI interaction (or not)!
2
0
68
@bansalg_
Gagan Bansal
6 months
Agents w/ APIs/tools are cool and popular now, but throwback to one of my favorite classic papers from 1994 on agents, by @etzioni and @dsweld . The first time I read and obsessed over it was as an undergraduate, when I saw it as a reference in Russell and Norvig!
Tweet media one
4
10
52
@bansalg_
Gagan Bansal
3 years
Do you work on AI/ML + HCI? We invite submissions for our new journal--- Special Issue on "AI for (and by) the People" Wide range of human + AI topics ✅ Open-source ✅ Virtual workshop post publication ✅ cc: @alison_m_smith , @gonzaloworks
Tweet media one
@alison_m_smith
Alison Renner (Smith)
3 years
Working on interesting problems around humans & #ai ? The "AI for (and by) the People" journal special issue explores the opportunities and challenges of designing and developing AI/ML for people. Deadline: Sep 15. #hcai #hcml #cfp @bansalg_ @gonzaloworks
0
3
10
2
23
43
@bansalg_
Gagan Bansal
7 months
I am super excited to see how this will power more research on not just multi-agent workflows but entirely new subtopics in human-AI interaction! Check out our new paper below ⬇️
@Chi_Wang_
Chi Wang
7 months
Imagine if ✨multiple✨ ChatGPT agents could collaborate to solve complex tasks for you! 🧑‍🦱🤝🤖🤖🤖 📢 AutoGen: A new framework for building multi-agent LLM applications It allows creating many agents that converse to solve complex tasks! ... 1/4
Tweet media one
7
50
190
1
10
40
@bansalg_
Gagan Bansal
2 years
My academic Twitter colleagues, I need a tiny favor! If you've ever made a paper co-authored by me a required reading for a course, can you please DM me the details (esp. the course # and year)? It would help me with my US immigration application! (Angry cat from last yr as clickbait)
Tweet media one
5
4
38
@bansalg_
Gagan Bansal
1 year
We expect to begin evaluating applications for 2023 summer internships with our team next week. Apply by early next week for full consideration!
@bansalg_
Gagan Bansal
2 years
Looking for research internships on #AI + #HCI ?? Our team, Human-AI eXperiences (HAX) at @MSFTResearch , is hiring multiple interns for summer 2023 on topics related to human-centered AI (see 👇). cc: @SaleemaAmershi @vykthur @adamfourney Apply at
Tweet media one
10
72
247
0
6
38
@bansalg_
Gagan Bansal
3 years
Looking for interdisciplinary internship on human-centered AI/ML/NLP?? 👇
Tweet media one
@SaleemaAmershi
Saleema Amershi
3 years
**Multiple** internship opportunities to work with the #HAX Team @MSFTResearch in 2022! If you're interested in #ResponsibleAI , #AI #UX , and tools for creating these, apply here: Learn more about our team here:
4
42
149
1
2
31
@bansalg_
Gagan Bansal
11 months
Such wonderful reception of our work on understanding programmer-CoPilot interaction. Has implications for understanding human-LLM interaction in general. cc: @HsseinMzannar @adamfourney @erichorvitz
@DynamicWebPaige
👩‍💻 Paige Bailey
11 months
"Our studies revealed that when solving a coding task with Copilot, programmers may spend a large fraction of total session time (34.3%) on just double-checking and editing suggestions, and spend *more than half* of the task time on Copilot related activities, together indicating
Tweet media one
Tweet media two
Tweet media three
Tweet media four
34
345
2K
0
5
28
@bansalg_
Gagan Bansal
1 year
Check out our new work on understanding and improving human-LLM interaction! #GitHubCopilot #LLMs
@HsseinMzannar
Hussein Mozannar
1 year
As Copilot becomes more popular, we need to understand how programmers interact with it. We built a model of interaction between Copilot and Programmers named 'CUPS' and predict programmer behavior in our latest paper
Tweet media one
6
29
138
1
2
28
@bansalg_
Gagan Bansal
2 years
We received an overwhelming number of submissions on trust and reliance for human-AI interaction at the TRAIT workshop @ #CHI2022 Even if you are not attending the workshop on 30th April, checkout the accepted papers below!
@alison_m_smith
Alison Renner (Smith)
2 years
Tweet media one
0
16
47
1
3
27
@bansalg_
Gagan Bansal
5 months
Come learn more about agents and AutoGen at the Microsoft booth at #NeurIPS2023 ! @Chi_Wang_ , @qingyun_wu , and I will be there on Monday 12/11, between 9 am-noon CST and 3:30-4:00 pm CST. Location: Booth 1003 - Next to entrance Hall D cc: @pyautogen , @MSFTResearch
@pyautogen
AutoGen
5 months
Working on AI Agents and attending #NeurIPS2023 ?? Don't miss in-person talk by @Chi_Wang_ on @pyautogen Details:
Tweet media one
1
11
40
1
4
26
@bansalg_
Gagan Bansal
6 months
✨How to get multiple OpenAI Assistants ( #GPTs ) *and* #AutoGen Agents all working together to solve tasks?✨ To learn, see the video by AI Jason. This is a good example of why supporting cross-platform agents will become increasingly important!
2
6
27
@bansalg_
Gagan Bansal
1 year
Check out our new work on improving interaction with LLMs 👇👇👇
@HelenasResearch
Helena Vasconcelos' Research
1 year
New research on improving human-AI interaction! 🌟 LLMs like #CoPilot can be amazing! But they can also suggest erroneous code & verifying their suggestions takes effort. We show that communicating uncertainty reduces these costs! BUT the notion of uncertainty also matters. 1/4
5
24
129
0
7
24
@bansalg_
Gagan Bansal
3 years
New compelling evidence for developing explainable AI! Our user-studies on open-domain QA show that explanations help end-users and outperform calibrated confidence (strong, unbeaten baseline) by a significant margin! That too whilst achieving "complementary" performance!! (1/2)
1
3
24
@bansalg_
Gagan Bansal
6 months
“Orca, a 13-billion language model that demonstrated strong reasoning abilities by imitating the step-by-step reasoning traces of more capable LLMs.”
@MSFTResearch
Microsoft Research
6 months
At Microsoft, we’re expanding AI capabilities by training small language models to achieve the kind of enhanced reasoning and comprehension typically found only in much larger models.
70
243
2K
0
2
22
@bansalg_
Gagan Bansal
2 years
I 100% agree -- it's important to understand when explanations help users, for which tasks, and along what metrics. E.g., it's important to understand when explanations do and don't lead to appropriate reliance... [1/2]
@hima_lakkaraju
𝙷𝚒𝚖𝚊 𝙻𝚊𝚔𝚔𝚊𝚛𝚊𝚓𝚞
2 years
Just because one user study showed that explanations produced by a method were not helpful for N homogenous users in a particular context, this does not imply that the method in question has no utility in any other setting. It is important to appreciate this nuance [4/N]
1
1
31
1
2
21
@bansalg_
Gagan Bansal
2 years
"The striking difference was that developers who used GitHub Copilot completed the task significantly faster–55% faster than the developers who didn’t use GitHub Copilot." Very promising, real-world results for human-AI interaction!
@irina_kAl
Eirini Kalliamvakou
2 years
Equal parts hard work and exciting work, I'm very glad to be sharing these results! #GitHubCopilot has had such strong impact on developers on many levels, it's a privilege to have front row seats to how we understand and measure that. More to come!
4
13
38
1
1
20
@bansalg_
Gagan Bansal
2 months
Seattle flu gave me insomnia so I thought I'd create an example of an #AutoGen feature that I find useful for creating end applications. Here I wanted the agents to find recent GitHub issues on AutoGen's repo and then render a neat markdown table using @willmcgugan 's Rich library.
0
4
22
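The tweet above describes a two-step pattern: agents fetch recent GitHub issues, then render them as a neat markdown table (the tweet uses the Rich library for that). As a rough, stdlib-only sketch of the rendering step alone: the issue rows below are hard-coded sample data standing in for real GitHub API results, and `to_markdown_table` is a hypothetical helper, not AutoGen or Rich code.

```python
# Hypothetical sketch of the rendering step described in the tweet: turn a
# list of issue dicts into a markdown table. Sample rows stand in for data
# an agent would fetch from the repo; this is not the Rich library's API.

def to_markdown_table(rows, headers):
    """Render a non-empty list of dicts as an aligned markdown table string."""
    widths = [max(len(h), *(len(str(r[h])) for r in rows)) for h in headers]

    def line(cells):
        return "| " + " | ".join(str(c).ljust(w) for c, w in zip(cells, widths)) + " |"

    out = [line(headers), line("-" * w for w in widths)]
    out += [line(r[h] for h in headers) for r in rows]
    return "\n".join(out)

# Sample rows (hypothetical; a real agent would fetch these via the GitHub API)
issues = [
    {"number": 1234, "title": "Improve group chat docs", "state": "open"},
    {"number": 1201, "title": "Fix token counting bug", "state": "closed"},
]
print(to_markdown_table(issues, ["number", "title", "state"]))
```

The real example in the tweet hands this formatting to an agent with code-execution abilities; the helper above only shows what the final table-building code might look like.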
@bansalg_
Gagan Bansal
2 years
We just started the #TRAIT2022 workshop at #CHI2022 ! Turns out our keynote speaker, John Lee @Jdlee888 , wrote his keynote's abstract with AI assistance (blue indicates contributions by an LLM)! And none of us knew until he just told us now!! Amazing!!
Tweet media one
0
1
19
@bansalg_
Gagan Bansal
7 months
Nice example of an online multi-agent playground built using #AutoGen !
Tweet media one
0
4
17
@bansalg_
Gagan Bansal
1 year
I had fun visiting and interacting with colleagues at UCSB! Here are the papers I discussed: 1. Modeling users: 2. Communicating Uncertainty: 3. Metrics: Data/code:
@AlfonAmayuelas
Alfonso Amayuelas
1 year
Last week, we had the @ucsbmmi Summit @ UCSB. It was great to listen to @bansalg_ from @MSFTResearch and understand how Copilot is changing the way we code. Users spend 50% of their time interacting with it and 20% verifying suggestions. Coders are 2x faster!
Tweet media one
Tweet media two
Tweet media three
0
1
14
1
1
16
@bansalg_
Gagan Bansal
3 months
More full-time opportunities to work with our team!
@SaleemaAmershi
Saleema Amershi
3 months
📢📢More opportunities on our team at #MicrosoftResearch ! 📢📢 Now hiring for Senior and Principal level Researchers and Software Engineers. If you want to advance the Frontiers of #AI to empower people and AI agents to solve real-world problems, apply below! 👇 Please RT
1
26
78
1
2
16
@bansalg_
Gagan Bansal
2 years
There is still time to submit to the TRAIT workshop at #CHI2022 !!! Submissions are due on Feb 11th.
@bansalg_
Gagan Bansal
2 years
🚨Announcing #CHI2022 workshop on Trust and Reliance in AI-Human Teams (TRAIT) ft. keynote by @jdlee888 , panel w/ #HCI & #AI experts, group activities & more! w/ @alison_m_smith , @ZanaBucinca , @tongshuangwu , @d19fe8 , @JessicaHullman , @DrSimoneStumpf See:
Tweet media one
3
30
123
0
3
16
@bansalg_
Gagan Bansal
7 months
If you are passionate about human-AI interaction and on the job market this year, we are hiring a full-time researcher🧑‍🦱+🤖 See the job post for details below.
@SaleemaAmershi
Saleema Amershi
7 months
📢📢We're hiring!📢📢 If you want to shape the future of AI and empower people and AI agents to collaboratively solve real-world problems, apply here: See also below for more exciting opportunities in #AI at #MSR with our partner teams. 👇👇
7
68
350
0
2
16
@bansalg_
Gagan Bansal
2 years
@JessicaHullman moderates a super interesting panel to evaluate our progress and envision the future of trust and reliance in human-AI Teams with leading AI and HCI researchers @mariadearteaga @SaleemaAmershi @kgajos @tmiller_unimelb @sigchi #TRAIT2022 #CHI2022
Tweet media one
Tweet media two
0
4
15
@bansalg_
Gagan Bansal
4 years
In fact, explanations increased the chance that users will accept its recommendation REGARDLESS of its correctness. Such systems seem deeply unsatisfying and fraught with ethical issues.(5/6)
Tweet media one
1
0
13
@bansalg_
Gagan Bansal
4 months
New tool for benchmarking agents -- AutoGenBench! By @adamfourney , @qingyun_wu , @pyautogen
@besanushi
Besmira Nushi 💙💛
4 months
New blog by @adamfourney and @qingyun_wu on measurement tools for complex multi agent workflows in @pyautogen . AutoGenBench is a command line tool on pypi which handles downloading, configuring, running, and reporting supported benchmarks in AutoGen.➡️
Tweet media one
1
6
28
0
0
12
@bansalg_
Gagan Bansal
2 months
If you work on human-AI interaction and agents, you might find the abstractions introduced in chapter 3 of the new #AutoGen tutorial practical and interesting 👇
@pyautogen
AutoGen
2 months
🚨We just released a new #AutoGen tutorial And with that, getting started became even easier! First 5 chapters are already online to help you learn about - agents that can converse - termination - adding humans in-the-loop - code executors - multi-agent patterns Let us know if
Tweet media one
2
38
183
0
2
12
@bansalg_
Gagan Bansal
7 months
Check out our new framework for building LLM agents! #AutoGen is already open source and growing very rapidly on GitHub. You can start using it today! More details soon… #LLMs #Microsoft #AI 👇👇👇
@Chi_Wang_
Chi Wang
7 months
Imagine if ✨multiple✨ ChatGPT agents could collaborate to solve complex tasks for you! 🧑‍🦰🤝🤖🤖🤖 📢 AutoGen: A new framework for building multi-agent LLM applications Repo: Stay tuned for a new AutoGen tech report on 10/5… #AutoGen #AI #LLMs #ML
7
45
163
0
2
11
@bansalg_
Gagan Bansal
3 years
Aptly put by @JessicaHullman -- "So the relationship more explanation = more [appropriate] trust should not be assumed when trust is mentioned as in the NIST report, just like it shouldn’t be assumed that more expression of uncertainty = more [appropriate] trust."
@JessicaHullman
Jessica Hullman
3 years
On NIST principles for explainable AI, and what's similar about these challenges and those in expressing uncertainty in model predictions. I see a lot of parallels despite the big difference in how much hype each gets
1
5
34
0
1
10
@bansalg_
Gagan Bansal
3 years
Thankful for colleagues at @MSFTResearch , #MicrosoftAether , and @uwcse for their commitment to develop reliable people-facing #AI systems! See announcement and link to a new open-source repository on backwards compatible #ML ⬇️
@erichorvitz
Eric Horvitz
3 years
Excited to share code for studying backward compatibility of #ML models—understanding changes in errors w/ model updates. Fun collab w/ @besanushi @bansalg_ @megha_byte @ecekamar @sytelus @dsweld @DeanCarignan & #MicrosoftAether eng team @MSFTResearch @NeurIPSConf #responsibleAI
0
1
14
1
2
10
@bansalg_
Gagan Bansal
2 years
More intense brainstorming sessions with larger groups at #TRAIT2022 #CHI2022
Tweet media one
Tweet media two
0
2
10
@bansalg_
Gagan Bansal
1 year
I remember finding @jennwvaughan 's advice really useful when attending conferences! Her point #8 "One new friend will often lead to many" is still my favorite!
@jennwvaughan
Jenn Wortman Vaughan
1 year
Attending #NeurIPS for the first time? It's been a while since I wrote this, but it's still relevant...
0
9
65
0
1
9
@bansalg_
Gagan Bansal
2 years
We've extended the submission deadline for #CHI2022 TRAIT workshop to Feb 24th!
@bansalg_
Gagan Bansal
2 years
There is still time to submit to the TRAIT workshop at #CHI2022 !!! Submissions are due on Feb 11th.
0
3
16
0
7
9
@bansalg_
Gagan Bansal
7 months
I am already mind-blown by the reception of #AutoGen by the OSS community! But I am also super excited about the numerous human-AI interaction questions that show up when users interact with and use multiple #LLM agents for their tasks...
@pyautogen
AutoGen
7 months
Join our rapidly growing community!🚀🚀🚀
0
1
15
0
0
9
@bansalg_
Gagan Bansal
3 years
Even expert users are susceptible to inappropriate reliance on AI advice 😮
@MarzyehGhassemi
Marzyeh
3 years
Are radiologists and IM/EM docs more susceptible to incorrect radiology advice when it's "from an AI"? Our new paper "Do as AI Say" highlights the potential danger of human/AI advice anchoring. Blog post by author @harini824 !
6
53
228
0
0
8
@bansalg_
Gagan Bansal
6 months
More internships with our close collaborators:
@julia_kiseleva
Julia Kiseleva
6 months
🚀 Exciting summer internship opportunity for PhD students at @MSFTResearch ! Dive into innovative projects like #Orca and @pyautogen offering thrilling research challenges. Ready to be part of groundbreaking AI work? Apply here: #AutoGen #LLMs #AgentEval
3
21
101
2
1
8
@bansalg_
Gagan Bansal
3 years
This was one of the most well-organized workshops I've attended!
@UpolEhsan
Upol Ehsan
3 years
Many of you couldn't join us at the #HCXAI workshop at #CHI2021 . We received tons of requests to make the videos available online. We always want to broaden participation. This is for you. 🎁
Tweet media one
4
17
70
1
0
8
@bansalg_
Gagan Bansal
3 years
Another great example of how AI systems are brittle and can fail in unexpected ways, and the need for human agency, control, and feedback in people-facing systems.
0
0
8
@bansalg_
Gagan Bansal
4 years
Finally caught up with exciting papers on explainable AI at #ICML2020 's Workshop on Human Interpretability #WHI2020 . Here is a subset of the many papers I liked, with a TLDR: (1/4)
1
1
8
@bansalg_
Gagan Bansal
2 months
Highly recommend for any dev or researcher working on AI agents!
0
0
7
@bansalg_
Gagan Bansal
3 years
@peterbhase @__Owen___ Very nice resource! You may be interested in our studies from last summer that show that NONE of the prior works (except for one very recent study on open-domain QA) has observed "complementary" performance from explanations!
@bansalg_
Gagan Bansal
4 years
New work on explainable AI! w/ @tongshuangwu , J. Zhu, R. Fok, @besanushi , @ecekamar , M. Ribeiro, and @dsweld . When AIs advise people, does an explanation of its reasoning actually help the person? Does it let the human outperform the AI? Does it ...(1/6)
6
43
217
1
2
6
@bansalg_
Gagan Bansal
3 years
@tmiller_unimelb Here's a very recent counter-example/-domain though: For the task of open-domain QA explanations work and help end-users. Led by @AnaValeriaGlez @sriniiyer88 @robinomial @YasharMehdad See:
0
0
6
@bansalg_
Gagan Bansal
1 year
The workshop deadline is approaching soon— Feb 23rd!! #chi2023
@d19fe8
Ken Holstein
1 year
There's still time to submit to the CHI TRAIT workshop on Trust and Reliance in AI-Assisted Tasks! We welcome submissions from both researchers (Research Track) and practitioners (Industry Track). Submissions are due next Thursday, Feb 23 (AoE). CfP:
1
8
19
0
0
5
@bansalg_
Gagan Bansal
3 years
"Heatmaps often only highlight dog faces regardless of whether AI is correct or wrong."
@anh_ng8
Anh (Totti) Nguyen
3 years
In our study of fine-grained dog classification (🦮 / 🐩 / 🐕), human-AI teams where humans use heatmaps performed even worse than the AI alone. Heatmaps often only highlight dog faces regardless of whether AI is correct or wrong.
Tweet media one
1
2
4
0
0
6
@bansalg_
Gagan Bansal
4 years
In search of complementary performance, we conducted new studies where human and AI performance was comparable. While we observed benefits from AI augmentation, they were NOT increased by showing state-of-the-art explanations. (4/6)
Tweet media one
1
0
6
@bansalg_
Gagan Bansal
3 years
@annargrs @uwnlp @nlpnoah Is there a link to a recording of this talk? Thanks :-)
1
0
5
@bansalg_
Gagan Bansal
2 years
Tweet media one
0
0
5
@bansalg_
Gagan Bansal
2 months
Custom environments play a pivotal role in building useful agents! Check out this new feature in #AutoGen !
@pyautogen
AutoGen
2 months
AutoGen's code execution capabilities have gotten an upgrade! You can use a Jupyter kernel to maintain a stateful session for code execution. 🤖🌎🔧 Learn more here:
Tweet media one
1
7
36
0
0
5
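The quoted AutoGen release note says a Jupyter kernel can maintain a stateful session for code execution. To illustrate what "stateful" means here, independent of AutoGen's actual executor API, below is a toy Python runner that executes snippets in one shared namespace, so later snippets can use variables defined by earlier ones, just like cells in a kernel. `StatefulExecutor` is a hypothetical stand-in, not the AutoGen class.

```python
# Toy illustration (not AutoGen's actual API) of stateful code execution:
# every snippet runs in the same namespace dict, so state accumulates
# across calls, the way it does in a long-lived Jupyter kernel session.

class StatefulExecutor:
    def __init__(self):
        self._ns = {}          # one namespace persists across run() calls

    def run(self, code: str):
        exec(code, self._ns)   # later snippets see earlier definitions

ex = StatefulExecutor()
ex.run("data = [1, 2, 3]")     # first snippet defines `data`
ex.run("total = sum(data)")    # second snippet reuses it
print(ex._ns["total"])         # → 6
```

A stateless executor, by contrast, would raise a `NameError` on the second snippet because `data` would be gone; that difference is what the upgrade in the quoted tweet is about.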
@bansalg_
Gagan Bansal
4 years
...work better than simply displaying the AI’s confidence? In the quest for augmented intelligence, such questions deserve critical attention! (2/6)
1
0
5
@bansalg_
Gagan Bansal
4 years
While our novel Adaptive explanations showed promise, we must develop explanation algorithms and interfaces that lead to complementary performance, e.g., by enabling appropriate reliance, and providing significant value over simple baselines such as showing AI confidence. (6/6)
1
1
5
@bansalg_
Gagan Bansal
6 months
Massive update to AutoGen! It now supports gpt-4-vision! #autogen #gpt4V
@Chi_Wang_
Chi Wang
6 months
🚀 @pyautogen new release is here with gpt-4-vision-preview multimodal models support! 🛠️ Codebase updates for supporting openai-python v1. 📊 New unstructured data support in RAG & async features for get_human_input. 🔧 Fresh tools & improved docs for devs. #GPT4V #AI #AutoGen
Tweet media one
0
16
51
0
0
5
@bansalg_
Gagan Bansal
3 years
Another instance of how imperfect AI in people-facing interfaces can cause real harm! And why human-centered AI and interfaces are necessary!
@MaartenSap
Maarten Sap (he/him)
3 years
This is why we need to keep scrutinizing the fairness of toxic or inappropriate content filters, and always let users circumvent the automatic systems.
0
3
16
0
0
5
@bansalg_
Gagan Bansal
3 years
@MaartenvSmeden @tongshuangwu @zacharylipton Sadly true. Especially #7 . And this is why user evaluations are crucial to get a realistic picture of the current state of XAI. See:
@bansalg_
Gagan Bansal
4 years
New work on explainable AI! w/ @tongshuangwu , J. Zhu, R. Fok, @besanushi , @ecekamar , M. Ribeiro, and @dsweld . When AIs advise people, does an explanation of its reasoning actually help the person? Does it let the human outperform the AI? Does it ...(1/6)
6
43
217
0
0
5
@bansalg_
Gagan Bansal
1 year
Pretty cool feature for reading papers faster! CC: @EricTopol whose highlights on COVID related papers I find very useful.
@SemanticScholar
Semantic Scholar
1 year
🚨 New beta feature live! Do you skim through papers trying to get a glimpse in a minute? By turning on #Skimming in Semantic Reader, you can skim faster with automatically highlighted overlays of the key points. Now available for 9k papers on desktop!
0
13
28
1
1
4
@bansalg_
Gagan Bansal
2 years
...as we suggested in our call to arms paper And as we showed in our studies with open-domain QA, where explanations significantly improved appropriate reliance [2/2]
0
0
4
@bansalg_
Gagan Bansal
4 years
Prior work on XAI only considers the case when AI by itself was more accurate than the human and the human-AI team. Explanations raised team performance closer to AI but if accuracy were the sole objective, removing people would have performed even better in their settings! (3/6)
Tweet media one
1
0
4
@bansalg_
Gagan Bansal
2 months
More opportunities to work with our lab!
@AhmedHAwadallah
Ahmed Awadallah
2 months
We are hiring senior and principal researchers and engineers to work on generative AI technologies including foundation models, small models and learning agent platforms. Applications at: and
2
21
72
0
1
4
@bansalg_
Gagan Bansal
7 months
Check out this cool coverage of our work on #AutoGen
@VentureBeat
VentureBeat
8 months
Microsoft’s AutoGen framework allows multiple AI agents to talk to each other and complete your tasks
3
9
13
0
0
4
@bansalg_
Gagan Bansal
5 months
We’ll be again at the Microsoft booth tomorrow 2:30-3:00 pm CST!
@bansalg_
Gagan Bansal
5 months
Come learn more about agents and AutoGen at the Microsoft booth at #NeurIPS2023 ! @Chi_Wang_ , @qingyun_wu , and I will be there on Monday 12/11, between 9 am-noon CST and 3:30-4:00 pm CST. Location: Booth 1003 - Next to entrance Hall D cc: @pyautogen , @MSFTResearch
1
4
26
0
0
3
@bansalg_
Gagan Bansal
4 years
We show that approaches maximizing AI accuracy (by using Log-loss) may lead to suboptimal team utility. Instead, we propose and optimize a new loss function based on the team's expected utility. (2/5)
1
0
2
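The thread above proposes replacing log-loss with a loss based on the team's expected utility. The idea can be made concrete with a toy numeric sketch; the utilities, acceptance threshold, and functional form below are hypothetical choices for illustration, not the paper's actual formulation.

```python
import math

# Toy sketch: log-loss only scores the predicted probability of the true
# label, while a team-utility loss (hypothetical form) weights errors by
# their cost to the human-AI team. If the AI is confident, the human is
# assumed to accept its recommendation, so errors carry a high acceptance
# cost; otherwise the human verifies, paying a smaller fixed effort cost.

def log_loss(p_true: float) -> float:
    return -math.log(p_true)

def team_utility_loss(p_true: float, cost_accepted_error: float = 5.0,
                      cost_verification: float = 1.0,
                      accept_threshold: float = 0.8) -> float:
    p_wrong = 1.0 - p_true
    if max(p_true, p_wrong) >= accept_threshold:
        return p_wrong * cost_accepted_error   # human accepts the advice
    return cost_verification                   # human double-checks instead

print(round(log_loss(0.7), 3))              # moderately confident, correct
print(team_utility_loss(0.7))               # below threshold: human verifies
print(round(team_utility_loss(0.9), 3))     # accepted: small expected error cost
print(round(team_utility_loss(0.1), 3))     # confidently wrong: costliest case
```

Under this toy model, a confidently wrong prediction (human accepts, full error cost) is penalized far more than an unconfident one the human double-checks, an asymmetry that plain log-loss does not capture; that is the flavor of "optimizing for team utility" rather than AI accuracy alone.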
@bansalg_
Gagan Bansal
4 years
cc: Joyce Zhou @cephcyn , my excellent co-author who is finally on Twitter. I misspelled their name in my original tweet--- so sorry, Joyce!
0
0
3
@bansalg_
Gagan Bansal
2 years
@jeffrey_heer @uwcse The course content looks amazing!
0
0
3
@bansalg_
Gagan Bansal
3 years
“5 to 20 fold increase in snares captured!” 😮
@TobyWalsh
Toby Walsh
3 years
Brilliant video asking you to rethink AI, featuring my amazing colleague @MilindTambe_AI and some endangered elephants @thebrilliantHQ via @YouTube
2
9
41
0
0
3
@bansalg_
Gagan Bansal
6 months
Make multiple GPTs collaborate with AutoGen!!
@pyautogen
AutoGen
6 months
We just released a new version of #AutoGen and added compatibility with #OpenAI Assistants! This means you can now make multiple GPTs collaborate to solve complex tasks 🤖🤖🤖 Checkout our new blog post for details:
Tweet media one
6
26
116
0
0
3
@bansalg_
Gagan Bansal
3 years
@leavittron @arimorcos @MLRetrospective Especially agree with "under-utilization of user research for human verification." Though I'd say the focus on human-subject studies is increasing rapidly! You may find our recent work relevant :)
@bansalg_
Gagan Bansal
4 years
New work on explainable AI! w/ @tongshuangwu , J. Zhu, R. Fok, @besanushi , @ecekamar , M. Ribeiro, and @dsweld . When AIs advise people, does an explanation of its reasoning actually help the person? Does it let the human outperform the AI? Does it ...(1/6)
6
43
217
1
0
3
@bansalg_
Gagan Bansal
2 years
What @tmiller_unimelb said! Nice to see more and more researchers and domains ask one of the most important questions in the context of #XAI and #AI in general.
@tmiller_uq
Tim Miller
2 years
If I could like this more than once, I would.
0
1
9
0
0
3
@bansalg_
Gagan Bansal
3 months
Super cool article!
@eaftandilian
Eddie Aftandilian
3 months
Check out our CACM article on measuring the impact of GitHub Copilot on developer productivity! There’s a brief video here as well:
1
7
23
0
0
3
@bansalg_
Gagan Bansal
6 months
🙌
@Chi_Wang_
Chi Wang
6 months
#autogen is mentioned by Satya around 14:00 @pyautogen #AI
3
6
34
0
0
3
@bansalg_
Gagan Bansal
1 year
@kous2v So sorry Koustuv, I can’t even imagine how frustrating and disruptive this must be :(
1
0
2
@bansalg_
Gagan Bansal
4 years
@laura_rieger_de @tongshuangwu @besanushi @ecekamar @dsweld Thank you! We tested w/ non-experts (MTurk), but even w/ experts, deployers should test & ensure explanations don't exacerbate inappropriate reliance or conf. bias. @ihsgnef 's work shows instances where experts are more immune to bad system suggestions:
0
0
2
@bansalg_
Gagan Bansal
7 months
The new CMD + I feature in @github #CoPilot is pretty darn neat! 🤌🤌
0
0
2
@bansalg_
Gagan Bansal
3 years
@AndrewLBeam @MarzyehGhassemi @DrLukeOR Especially agree with "We should advocate for thorough ..validation of these systems..., showing that patient and health-care outcomes are improved" Precisely why we argued for carefully measuring effect of explanations on human-AI team performance:
@bansalg_
Gagan Bansal
4 years
New work on explainable AI! w/ @tongshuangwu , J. Zhu, R. Fok, @besanushi , @ecekamar , M. Ribeiro, and @dsweld . When AIs advise people, does an explanation of its reasoning actually help the person? Does it let the human outperform the AI? Does it ...(1/6)
6
43
217
2
0
2
@bansalg_
Gagan Bansal
3 years
"Anthropic said its work would be focused on 'large-scale AI models', including making the systems more easy to interpret and 'building ways to more tightly integrate human feedback into the development and deployment of these systems'."
@rao2z
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
3 years
The cup of open #AI runneth over. There is the used to be open AI, there is the wannabe open AI, and now apparently there is the really like-fer-sure-this-time wannabe real open AI..
1
3
4
0
0
2
@bansalg_
Gagan Bansal
2 months
Among the many pieces of inspiring feedback we received from this expert, whom I've admired for more than a decade, it was so fascinating to hear how much @pyautogen has spurred bottom-up creativity and advanced the world's understanding of AI agents! cc: @ekzhu @jack_gerrits
@Chi_Wang_
Chi Wang
2 months
Had a conversation with an iconic leader + my mentor^2 and was told that he was a fan of #AutoGen ! That made my day❤️‍🔥 Super inspired by an insight to the uniqueness of #AutoGen 🦄
2
1
21
0
0
2
@bansalg_
Gagan Bansal
2 years
@AlexKale17 Hopefully there will be another one 😉
0
0
2
@bansalg_
Gagan Bansal
2 years
Pretty cool and impactful result!
@stefanjwojcik
Stefan
2 years
The Birdwatch algorithm surfaces notes to potentially misleading Tweets. Using survey data, we find notes selected by the algorithm reduce the likelihood of agreeing with the substance of a potentially misleading Tweet by about 26%.
1
0
4
0
0
2
@bansalg_
Gagan Bansal
4 years
Team-loss accounts for people adjusting their trust in AI based on the stakes and the cost of human effort. Positive effects can be observed in both synthetic and real datasets and the shift in behavior reflects the encoded human-centered properties. (3/5)
Tweet media one
1
0
2
@bansalg_
Gagan Bansal
2 years
@HsseinMzannar @adamfourney It was so great to have you and your expertise in our collaboration, Hussein!
0
0
2
@bansalg_
Gagan Bansal
10 months
@besanushi @tdietterich @GaryMarcus @xiao_ted @GoogleDeepMind Backwards compatibility for foundational models would be so helpful! But I think end applications can also build in some of this robustness!
1
0
2
@bansalg_
Gagan Bansal
6 months
More full-time positions with AI Frontiers:
@SaleemaAmershi
Saleema Amershi
7 months
📢📢We're hiring!📢📢 If you want to shape the future of AI and empower people and AI agents to collaboratively solve real-world problems, apply here: See also below for more exciting opportunities in #AI at #MSR with our partner teams. 👇👇
7
68
350
0
0
2
@bansalg_
Gagan Bansal
6 months
Haha clever and meta! It's built with AutoGen to help build with AutoGen using discussions from people building with AutoGen! 🤓 🤪
Can't find the answers in the docs? no problem 🤖 I built a multi-agent LLM application to query the collective developer knowledge of the @pyautogen discord server message history. @chainlit_io @OpenAI @trychroma
2
5
17
0
1
2
@bansalg_
Gagan Bansal
6 months
Check out the new blog post on AutoGen and Semantic Kernel by @johnmaeda and Devis Lucato!
@alexchaomander
Alex Chao
6 months
🤝 AutoGen + Semantic Kernel! Devis Lucato shows off how AutoGen can be the basis of a new planner that you can use in your Semantic Kernel applications, unlocking a whole class of interesting scenarios because of conversational multi-agents!
3
15
50
0
0
2