Introducing Advanced Techniques for Enhancing Large Language Models (LLMs)

In the rapidly evolving field of artificial intelligence, mastering Large Language Models (LLMs) requires a blend of optimization, graph technology, and Retrieval-Augmented Generation (RAG) techniques. This talk introduces key strategies for maximizing LLM performance, integrating graph databases, and advancing RAG methodologies. We’ll delve into topics such as optimization flows and prompt engineering while also examining the synergy between LLMs and graph technology through knowledge graphs and vector searches. Additionally, we’ll showcase advanced approaches in RAG, including multi-modal and ensemble retrievers, highlighting practical applications and potential pitfalls. This session is designed for AI practitioners seeking to harness the full potential of LLMs in their projects.
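As a loose illustration of one topic named in the abstract, the sketch below shows what an "ensemble retriever" for RAG can look like: two toy retrievers (keyword overlap and a stand-in for vector similarity) whose rankings are fused with reciprocal-rank fusion. This is not the speaker's implementation; the documents, function names, and scoring choices are all hypothetical and purely illustrative.

```python
# Minimal, illustrative sketch of an ensemble retriever for RAG.
# All names and data are hypothetical; real systems would use an
# embedding model and a vector store instead of these toy scorers.
from collections import defaultdict

DOCS = [
    "Knowledge graphs link entities such as customers, products, and orders.",
    "Vector search retrieves passages by embedding similarity.",
    "Prompt engineering shapes LLM behaviour without retraining.",
]

def keyword_retriever(query, docs):
    """Rank documents by simple word overlap with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(d.lower().split())), d) for d in docs]
    return [d for s, d in sorted(scored, key=lambda x: -x[0]) if s > 0]

def vector_retriever(query, docs):
    """Stand-in for an embedding-based retriever (character overlap here)."""
    q = set(query.lower())
    scored = [(len(q & set(d.lower())) / len(set(d.lower())), d) for d in docs]
    return [d for _, d in sorted(scored, key=lambda x: -x[0])]

def ensemble(query, docs, retrievers, k=60):
    """Fuse several rankings with reciprocal-rank fusion (RRF)."""
    scores = defaultdict(float)
    for retrieve in retrievers:
        for rank, doc in enumerate(retrieve(query, docs)):
            scores[doc] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    for doc in ensemble("How does vector search work?", DOCS,
                        [keyword_retriever, vector_retriever]):
        print(doc)
```

The same fusion step works unchanged if the toy scorers are swapped for a dense vector retriever and a knowledge-graph lookup, which is the kind of combination the talk discusses.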

Quick Info
Conference:
Event Type:
Venue:
Is Topic: Yes
Timeslots: -
Content
Language:
Level:
Target Audience: Developer, Power User, General User
Speaker

Dr. Chung Ng

Dr. Chung is an SVP at the Group CTO Office of the HKT/PCCW Group, where he's responsible for leading the group's product and technology roadmap and strategic development. He also represents the group as a board member of Lynx Analytics.

Before HKT/PCCW, Chung contributed to Telstra's Big Data/AI strategy as well as its international growth strategy. Prior to Telstra, he was an Associate Partner at Cluster Technology Limited, which serves the Greater China market with professional services and solutions in high-performance computing, machine learning, big data, and public cloud.

In 2008, Chung joined McKinsey & Company in its Hong Kong office. He received his DPhil in Information Engineering from the University of Oxford and held a Croucher Foundation Scholarship for his research degree in wireless ad-hoc networks. He also received his BEng and MPhil in Information Engineering from the Chinese University of Hong Kong.

Country / Region: Hong Kong
Affiliations: HKT Limited
Is Remote Presentation: false