Hi, I’m Adit Jain.
Ph.D. candidate at Cornell ECE. I study reinforcement learning, distributed optimization, and networks of LLMs.
I am a Ph.D. candidate at the School of Electrical and Computer Engineering at Cornell University, where I am advised by Prof. Vikram Krishnamurthy. I expect to graduate in May 2026 and am looking for full-time positions in industry.
I work on problems related to reinforcement learning (structured stochastic optimal control and bandits), distributed optimization, and game theory.
I am especially interested in applying these methods to large language models, both to improve their training and to model networks of interacting LLMs.
This summer, I interned with the Machine Learning Research Group at Morgan Stanley, where I developed a mixture-of-tokens generation method to improve verifiable reinforcement learning in large language models.
In the summer of 2024, I interned at Adobe Research, where I developed high-dimensional sparse bandit algorithms for efficient data annotation and proved regret bounds for them. I applied these algorithms to improve the efficiency of supervised fine-tuning of large language models.
Here's the link to my 2-page resume. I did my Bachelor's at IIT Guwahati with a major in Electronics and Communication Engineering and a minor in Computer Science. I love reading all kinds of books, learning new topics and skills, discussing and debating, and playing racquet sports like badminton and squash.

Publications
- Information Diffusion and Preferential Attachment in a Network of Large Language Models. arXiv preprint. [Networks of LLMs]
- Blocked Sparse Linear Bandits. arXiv preprint. [Bandits]
- Interacting Large Language Model Agents: Bayesian Social Learning Based Interpretable Models. IEEE Access, 2025. [Networks of LLMs]
- Identifying Hate Speech Peddlers in Online Platforms: A Bayesian Social Learning Approach for LLM-Driven Decision-Makers. CDC 2024. [Networks of LLMs]
- Controlling Stochastic Gradient Descent Using Stochastic Approximation for Robust Distributed Optimization. NACO 2025. [Controlling Federated Learning]
- Structured Reinforcement Learning for Incentivized Stochastic Covert Optimization. IEEE Control Systems Letters, 2024. [Controlling Federated Learning]
- Controlling Federated Learning for Covertness. TMLR 2024. [Controlling Federated Learning]
- Bimodal Bandits: Max-Mean Regret Minimization. Asilomar 2024. [Bandits]
- Interpretable Deep Image Classification Using Rationally Inattentive Utility Maximization. IEEE JSTSP 2024.
- Joint Antenna Selection and Beamforming for an IRS-Aided IoT System. IEEE WCNC 2024.
- Low-Complexity Passive Beamforming Algorithms for Intelligent Reflecting Surfaces with Discrete Phase-Shifts over OFDM Systems. NCC 2022.