Author Archive | Cornellius Yudha Wijaya

In this article, we explore statistical methods for evaluating LLM performance, an essential step in ensuring stability and effectiveness.

10 Python One-Liners That Will Boost Your Data Preparation Workflow
This article will explore how simple Python one-liners can boost your data preparation workflow.
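As a taste of the kind of snippet the article covers, here is a minimal sketch (assuming pandas and a hypothetical data.csv) that chains loading, deduplication, and missing-value handling into a single line:

import pandas as pd

# Load the file, drop duplicate rows, and fill missing values in one chained expression
df = pd.read_csv("data.csv").drop_duplicates().fillna(0)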
Prompt Engineering Patterns for Successful RAG Implementations
This article will explore various prompt engineering methods to improve RAG results.
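One common pattern such methods build on is a grounded prompt template that constrains the model to the retrieved context. The sketch below is illustrative rather than the article's exact prompts; context_chunks and question are hypothetical inputs from a retriever and a user query:

# Minimal sketch of a grounded RAG prompt template (illustrative only)
def build_prompt(context_chunks, question):
    context = "\n\n".join(context_chunks)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you do not know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )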
Implementing Multi-Modal RAG Systems
In this article, we will implement multi-modal RAG using text, audio, and image data.
Building Your First Multi-Agent System: A Beginner’s Guide
Learn how to build a collaborative multi-agent automation system.
The 2025 Machine Learning Toolbox: Top Libraries and Tools for Practitioners
This article will explore the top machine learning libraries and tools for practitioners in 2025.
3 Easy Ways to Fine-Tune Language Models
Language models have quickly become cornerstones of many business applications in recent years. Their usefulness is demonstrated daily by the many people who interact with them. As language models continue to find their place in people’s lives, the community has made many breakthroughs to improve their capabilities, primarily through fine-tuning. Language model fine-tuning is a […]
Future-Proof Your Machine Learning Career in 2025
Machine learning continues to deliver benefits of all sorts that have become woven into society, meaning that a career in machine learning will only grow in importance over time. A career in machine learning is something many people strive for; however, it’s not an easy journey to start. Beyond this, even once you have begun […]
RAG Hallucination Detection Techniques
Large language models (LLMs) are useful for many applications, including question answering, translation, summarization, and much more, and recent advancements have only increased their potential. As you are undoubtedly aware, there are times when LLMs provide factually incorrect answers, especially when the response desired for a given input prompt is not represented […]
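As a toy illustration of one family of detection techniques (not necessarily the methods the article covers), a simple lexical-overlap check between the generated answer and the retrieved context can flag answers that are poorly grounded; the 0.3 threshold below is a hypothetical choice:

# Toy sketch: flag a possible hallucination when the generated answer shares
# few words with the retrieved context. The threshold is an assumption.
def overlap_score(answer: str, context: str) -> float:
    answer_words = set(answer.lower().split())
    context_words = set(context.lower().split())
    return len(answer_words & context_words) / max(len(answer_words), 1)

def looks_hallucinated(answer: str, context: str, threshold: float = 0.3) -> bool:
    return overlap_score(answer, context) < threshold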
7 Next-Generation Prompt Engineering Techniques
With large language model (LLM) products such as ChatGPT and Gemini taking over the world, we need to adjust our skills to follow the trend. One skill we need in the modern era is prompt engineering. Prompt engineering is the strategy of designing effective prompts that optimize the performance and output of LLMs. By structuring […]