About Me

I am Head of AI at CONXAI Technologies in Munich, Germany, where I lead a team of ML Engineers developing AI solutions for the Architecture, Engineering and Construction (AEC) industry.
I did my Postdoctoral Research with Michael Black at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. I received my PhD in Computer Science at Georgia Tech, advised by Devi Parikh.

I have also been fortunate to spend summers at Toyota Technological Institute in Chicago (TTIC), Facebook AI Research (FAIR), Curai Health and Indiana University's Dept. of Psychological and Brain Sciences.

Arjun Chandrasekaran

Head of AI
CONXAI
Munich, Germany.

Email: arjun.chandrasekaran@conxai.com
Google Scholar
LinkedIn
Curriculum Vitae

Past Research

I am interested in multi-modal machine-learning problems in computer vision and natural language processing.

I primarily work on scene understanding, specifically, identifying and localising objects and people in a scene.

The goal of my current work is to create value along multiple dimensions for our customers in the Construction (AEC) industry.

In the past, I worked on human action understanding in 3D, computational models for embodied human interactions, and specific aspects of human interactions such as humor and narrative. I also worked on understanding human-AI interactions with the goal of creating better human-AI teams.

My work and interests span the areas of computer vision, natural language processing, machine learning, crowdsourcing and cognitive science.



  Publications


How much coffee was consumed during EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI
A. Kalyan, A. Kumar, A. Chandrasekaran, A. Sabharwal, P. Clark
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Oral presentation
[Project Page]     [Data]     [Code]


Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings
V. Prabhu, A. Chandrasekaran, K. Saenko, J. Hoffman
International Conference on Computer Vision (ICCV), 2021.
[Project webpage]    [Video]    [Code]   


BABEL: Bodies, Action and Behavior with English Labels
A. Punnakkal*, A. Chandrasekaran*, N. Athanasiou, A. Quiros-Ramirez, M. J. Black (*Equal contribution)
Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[Video]     [Dataset]     [Interface demos]     [Code]



A computational model of early word learning from the infant's point of view
S. Tsutsui, A. Chandrasekaran, M. Reza, D. Crandall, C. Yu
Annual Conference of the Cognitive Science Society (CogSci), 2020.
Talk
[Code]


Do explanation modalities make VQA models more predictable to a human?
A. Chandrasekaran*, V. Prabhu*, D. Yadav*, P. Chattopadhyay*, D. Parikh (*Equal contribution)
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.


Punny Captions: Witty Wordplay in Image Descriptions
A. Chandrasekaran, D. Parikh, M. Bansal
North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT), 2018.



Evaluating Visual Conversational Agents via Cooperative Human-AI Games
P. Chattopadhyay*, D. Yadav*, V. Prabhu, A. Chandrasekaran, A. Das, S. Lee, D. Batra, D. Parikh (*Equal contribution)
AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2017.
[Code]


Sort Story: Sorting Jumbled Images and Captions into Stories
H. Agrawal*, A. Chandrasekaran*, D. Batra, D. Parikh, M. Bansal (*Equal contribution)
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016.
[Podcast Interview]


We are Humor Beings: Understanding and Predicting Visual Humor
A. Chandrasekaran, A. Kalyan, S. Antol, M. Bansal, D. Batra, C. L. Zitnick, D. Parikh
Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Spotlight presentation
[Download data]    [Browse funny scenes]    [Funny-unfunny pairs]    [Spotlight talk]
Media Coverage: MIT Technology Review, Newsweek, Virginia Tech ECE News.



  Preprints