A team of Microsoft researchers, in collaboration with researchers from ZJU-UIUC Institute and the National University of Singapore, recently introduced AllHands, a comprehensive analytic framework designed to handle large-scale verbatim feedback using a natural language interface powered by large language models (LLMs).
AllHands provides software developers with a user-friendly solution for extracting valuable insights from extensive verbatim feedback. The framework follows a conventional feedback analytic workflow, first classifying the feedback and modelling topics to convert the data into a structured format. LLMs are integrated here to improve accuracy and generalisation.
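The classification step can be pictured as turning each free-text feedback item into a labelled record. The sketch below is purely illustrative and not taken from the paper: the `llm_classify` stub stands in for a real LLM call, and the category names and feedback strings are invented examples.

```python
# Illustrative sketch (not the paper's code): convert verbatim feedback
# into structured records by assigning each item a category label.

CATEGORIES = ["bug", "feature request", "praise", "other"]

def llm_classify(feedback: str) -> str:
    """Stand-in for an LLM call that picks one of CATEGORIES.
    A simple keyword heuristic substitutes for the model here."""
    text = feedback.lower()
    if "crash" in text or "error" in text:
        return "bug"
    if "please add" in text or "would be nice" in text:
        return "feature request"
    if "love" in text or "great" in text:
        return "praise"
    return "other"

def structure_feedback(items: list[str]) -> list[dict]:
    """Turn raw verbatim feedback into structured records."""
    return [{"text": t, "category": llm_classify(t)} for t in items]

records = structure_feedback([
    "The app crashes when I open settings.",
    "Would be nice to have dark mode, please add it!",
    "I love the new layout, great work.",
])
for r in records:
    print(r["category"])  # bug / feature request / praise
```

In AllHands itself this labelling is done by an LLM rather than keyword rules, which is what the authors credit for the improved accuracy and generalisation.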
An LLM agent then translates user questions about the feedback into Python code, executes it, and provides multi-modal responses, including text, code, tables, and images.
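The question-answering loop can be sketched as: generate code from the question, execute it over the structured feedback, and return the result. This is a minimal illustration under assumed names, not the framework's implementation; `generate_code` is a stub with one canned translation standing in for the LLM.

```python
# Illustrative sketch (not the paper's code): an agent translates a
# natural-language question into Python code and executes it over the
# structured feedback produced by the earlier classification step.
from collections import Counter

feedback = [  # invented example records
    {"text": "App crashes on login", "category": "bug"},
    {"text": "Add dark mode", "category": "feature request"},
    {"text": "Crashes after update", "category": "bug"},
]

def generate_code(question: str) -> str:
    """Stand-in for an LLM translating a question into Python code.
    A real agent would prompt the model with the data schema and the
    question; here one canned translation illustrates the idea."""
    return "result = dict(Counter(item['category'] for item in feedback))"

question = "How many feedback items fall into each category?"
code = generate_code(question)

ns = {"feedback": feedback, "Counter": Counter}
exec(code, ns)        # the agent executes the generated code
print(ns["result"])   # counts per category
```

A real agent would also decide when to render the result as a table or chart, which is how AllHands produces its multi-modal answers.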
The researchers evaluated AllHands on three diverse feedback datasets. The framework outperformed baselines at every stage, from classification and topic modelling to providing comprehensive, correct answers to user queries. It handled a wide range of common feedback-related questions and could be extended with custom plugins for more complex analyses.
The authors mention that existing solutions for feedback classification and topic modelling have limitations, such as requiring substantial human-labelled data, lacking generalisation, and struggling with challenges like polysemy and multilingual scenarios.
The paper also noted that while various tools have been developed to support specific feedback analysis objectives, no flexible, unified framework exists that can accommodate a wide array of analyses. AllHands aims to bridge this gap by leveraging the capabilities of LLMs. The authors present AllHands as a new approach that addresses the limitations of existing methods, such as their reliance on supervised machine learning models.