oneAPI is an open, standards-based, multi-architecture programming model that delivers a common developer experience across accelerator architectures, promising faster application performance, greater productivity, and more room for innovation. The oneAPI initiative encourages collaboration on the oneAPI specification and on compatible oneAPI implementations across the ecosystem.
The competition boasted a slick line of prizes, including the iPhone 14, iPad Air, and Samsung Galaxy Watch, among others, drawing registrations from the global AI and machine learning community.
The participants delved into Intel’s oneAPI, a standards-based programming model designed for use across multiple architectures, including CPUs, GPUs, FPGAs, and other accelerators. As one winning participant highlighted, it enables accelerated computation without the constraints of vendor lock-in.
The challenge posed to contestants was to build a model capable of generating responses that matched the provided ‘Answer’ for each question. Submissions were evaluated in two phases, one of which required participants to document their hackathon journey in insightful blogs.
The task of singling out the top three contenders fell to the jury, who studied the outcomes meticulously. To understand their approaches, AIM spoke to the champions of the LLM Challenge, who recounted their experiences with MachineHack and shed light on the methods behind their results.
Securing the first rank was Ramashish Gupta, a fourth-year undergraduate student from IIT Kharagpur.
Reflecting on the initial phase of the process, Ramashish blogged, “This dataset defies the conventions of a typical extractive question-answering dataset, where answers are readily found verbatim within the context. A comprehensive analysis revealed that a substantial 35% of the answers eluded direct contextual extraction.”
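The kind of analysis Ramashish describes, measuring what fraction of answers can be found verbatim in their context, can be sketched with a few lines of Python. The field names `context` and `answer` here are assumptions for illustration, not the competition dataset's actual schema:

```python
def extractable_fraction(examples):
    """Return the fraction of examples whose answer appears
    verbatim (case-insensitively) in the context passage."""
    hits = sum(
        1 for ex in examples
        if ex["answer"].lower() in ex["context"].lower()
    )
    return hits / len(examples)

# Toy records standing in for the real dataset:
sample = [
    {"context": "Paris is the capital of France.", "answer": "Paris"},
    {"context": "Is the sky blue? The passage says so.", "answer": "Yes"},
    {"context": "The Nile flows through Egypt.", "answer": "Egypt"},
]
print(extractable_fraction(sample))  # 2 of 3 answers found verbatim
```

On the competition data, a check along these lines is what would surface the figure Ramashish cites: roughly 35% of answers not directly extractable from the context.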
Ramashish further emphasised the inadequacy of traditional encoder models, which merely pinpoint the start and end indices of the answer text, for such a distinctive dataset. He argued instead for a generative question-answering model built on an encoder-decoder architecture.
Furthermore, Ramashish cautioned against the added complexity of training separate models for yes-no and true-false questions.
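One common way to avoid separate models, in the spirit of this advice, is to cast every question type into a single text-to-text format so one encoder-decoder model can be fine-tuned on all of them. This is a minimal sketch of that idea; the prompt template and function name are assumptions, not Ramashish's actual code:

```python
def to_text2text(question, context, answer):
    """Cast any question type (extractive, yes-no, true-false)
    into one source/target text pair, so a single generative
    model can handle all of them instead of separate models."""
    source = f"question: {question} context: {context}"
    target = str(answer)  # "yes", "True", or a free-form span
    return source, target

src, tgt = to_text2text(
    "Is water wet?", "Water is commonly described as wet.", "yes"
)
print(src)  # question: Is water wet? context: Water is commonly described as wet.
print(tgt)  # yes
```

Because yes-no and true-false answers are just short target strings in this framing, they need no special head or separate training run.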
The model crafted under Ramashish’s expertise secured the lead position on the leaderboard with an impressive score of 0.376, and, as the ambitious student highlighted, its actual capabilities go beyond what that number alone conveys.
For an in-depth exploration of Ramashish’s journey through the LLM Challenge, read the complete blog here.
Check out the solution here.
Runner-up Jatin Yadav’s journey with MachineHack began a few months ago. However, his tryst with data engineering began during a college session, where he first encountered the concepts that would later become his expertise. Jatin’s commitment to expanding his knowledge is evident in the courses he pursued in data science.
The LLM Challenge judges’ commentary highlighted the excellence of his work, particularly lauding Jatin for his articulate explanations of optimizations, the reusability of his GitHub repository, and the performance of his model on the designated task.
The highlight of the hackathon, as per Jatin, was the opportunity to use Intel’s latest graphics processors at no cost, a benefit that would have incurred significant expense on alternative cloud platforms.
Check out the solution here.
Abhinaba Bala, a research scholar at the International Institute of Information Technology, Hyderabad, secured the third position in the LLM Challenge.
Bala is an accomplished NLP researcher dedicated to the development of datasets and tools tailored for low-resource languages, showcasing expertise in the multi-modal domain and news article enrichment.
With a robust background in 3D computer vision, he thrives on collaborative endeavours and actively seeks out opportunities to contribute to interdisciplinary projects.
For this hackathon, he chose SimpleT5, a Python library that simplifies training T5 models, as the foundation of his solution.
Check out the solution here.
Concluding on a Successful Note
Beyond the top three winners, the competition witnessed talent across the board. Noteworthy are the performances of the other finalists: Ashwin Kanth, Pratik Davidson Deogam, and Padmakumar. Their contributions added depth to the hackathon, showcasing diverse approaches and solutions.
Read Ashwin Kanth’s blog here.
Read Pratik Davidson Deogam’s blog here.
Read Padmakumar’s blog here.
The hackathon also boasted a distinguished panel of judges, featuring Kavita Aroor, Developer Marketing Lead for the Asia-Pacific and Japan region at Intel; Anish Kumar, AI Software Solutions Engineering Manager for the Asia-Pacific region at Intel; and Vishnu Madhu, an AI Software Solutions Engineer at Intel. Their collective judgement brought a wealth of expertise and insight, adding an extra layer of prestige to the event.
The ‘oneAPI Hackathon: The LLM Challenge’ marks a pivotal moment in the landscape of large language models, which currently dominate the tech world.
The hackathon showcased not only advancements in LLMs but also the evolving dynamics between tech powerhouses and AI developers. The event marked a leap forward in pushing the boundaries of what LLMs can achieve when built on the oneAPI framework.