Koog is an innovative, open-source agentic framework built by JetBrains. It empowers Kotlin developers to create and run AI agents entirely within the JVM ecosystem, leveraging a modern Kotlin DSL. This means you can build intelligent, autonomous agents with the same ease and productivity that Kotlin brings to everyday development.
The Benefits of Koog for Your AI Agentic Projects
Koog offers a compelling set of features and advantages that make it an excellent choice for anyone looking to dive into AI agent development with Kotlin:
Pure Kotlin Implementation: Build and run your AI agents entirely in idiomatic Kotlin. This means leveraging all the benefits of Kotlin – conciseness, null safety, and excellent tooling – for your AI projects.
Modular Feature System: Extend your agent’s capabilities through a highly composable feature system. This allows for flexible and scalable agent design.
Tool Integration: Koog allows you to create and integrate custom tools, giving your agents access to external systems and resources. This is crucial for agents that need to interact with the real world or specific APIs.
Powerful Streaming API: Process responses from Large Language Models (LLMs) in real-time. This is essential for responsive user interfaces and efficient handling of large outputs. It even supports invoking multiple tools on the fly from a single LLM request.
Intelligent History Compression: Optimize token usage while maintaining conversation context through various pre-built strategies. This helps manage costs and improves efficiency when dealing with long conversations.
Persistent Agent Memory: Enable knowledge retention across different sessions and even between different agents, leading to more robust and capable AI.
Comprehensive Tracing: Debug and monitor agent execution with detailed and configurable tracing of LLM calls, tools, and agent stages. This provides invaluable insight into your agent’s behavior.
Support for Various LLM Providers: Koog integrates with popular LLM providers like Google, OpenAI, Anthropic, OpenRouter, and Ollama, giving you flexibility in choosing your underlying AI models.
My Experience with Koog
As someone currently working on an AI agentic project without prior AI coding experience, I can confidently say that Koog (version 0.2.1) is an excellent fit for it. The framework’s design is intuitive, making the core concepts of building AI agents easy to grasp. The clear documentation and the idiomatic Kotlin approach meant that I could get started quickly and see tangible results. The ability to integrate tools and design complex workflows without getting bogged down in low-level AI complexities has been a game-changer for my project.
Conclusion
Koog is a standout framework for Kotlin developers venturing into the exciting field of AI agents. Its pure Kotlin implementation, comprehensive features, and developer-friendly design make it exceptionally powerful and enjoyable to work with. It’s clear that JetBrains has put a lot of thought into making AI agent development accessible and efficient. Even for someone like me, without extensive prior AI coding experience, Koog has proven easy to work with and an excellent foundation for building sophisticated AI agentic projects. If you’re a Kotlin developer looking to build AI agents, I highly recommend giving Koog a try – you won’t be disappointed!
Android development just got a significant upgrade with the introduction of Gemini Journeys in Android Studio. This innovative AI-powered feature promises to transform how we approach end-to-end testing by leveraging natural language prompts instead of traditional manual test creation.
What is Gemini Journeys?
Gemini Journeys represents a paradigm shift in mobile testing methodology. Instead of writing complex test scripts line by line, developers can now describe their testing intentions in plain English, and Gemini AI translates these prompts into comprehensive end-to-end tests.
The feature integrates seamlessly with Android Studio’s preview environment, offering developers an intuitive way to:
Generate automated UI tests through conversational prompts
Create comprehensive test scenarios without deep testing framework knowledge
Accelerate the testing workflow significantly
Reduce the barrier to entry for comprehensive mobile testing
Hands-On Experience: Building with KoinBase
To explore Gemini Journeys’ capabilities, I created a demo project called KoinBase, a simple cryptocurrency tracking application built with Jetpack Compose. The app showcases modern Android development practices while serving as a perfect testing ground for AI-assisted test generation.
Key Features of the Demo:
Clean Architecture: Implementing MVVM pattern with proper separation of concerns
Jetpack Compose UI: Modern declarative UI framework
Dependency Injection: Using Koin for lightweight DI
Network Integration: RESTful API consumption for crypto data
Material 3 Design: Following latest design guidelines
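To make the dependency injection piece concrete, here is a minimal sketch of how a Koin module for an app like this could be wired. The class names (`CryptoApi`, `CryptoRepository`) are illustrative stand-ins, not the actual classes from the KoinBase demo:

```kotlin
import org.koin.core.context.startKoin
import org.koin.dsl.module

// Illustrative stand-ins for the demo's real network and data classes.
class CryptoApi
class CryptoRepository(val api: CryptoApi)

// A Koin module declares how the object graph is built.
val appModule = module {
    single { CryptoApi() }                 // one shared API client
    single { CryptoRepository(api = get()) } // repository resolves its API from the graph
}

// Typically called once from Application.onCreate().
fun initDi() {
    startKoin { modules(appModule) }
}
```

The appeal of Koin here is that the wiring stays plain Kotlin: no annotation processing, no generated code, just a DSL.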
First Impressions: A Game Changer
After experimenting with Gemini Journeys on the KoinBase project, here are my initial thoughts:
The Good:
Intuitive Workflow: Describing test scenarios in natural language feels remarkably natural
Productivity Boost: Test creation time was reduced significantly compared to manual approaches
Intelligent Context: Gemini understands app structure and suggests relevant test scenarios
Quality Output: Generated tests are comprehensive and well-structured
The Promise:
This technology represents a fundamental shift toward more accessible and efficient mobile testing. For teams struggling with testing coverage or developers new to automated testing, Gemini Journeys could be transformational.
Looking Forward
Gemini Journeys appears to be more than just another AI tool: it positions itself as a genuine shift in mobile testing workflows. The ability to generate robust E2E tests through conversational prompts could democratize comprehensive testing practices across development teams of all skill levels.
As AI continues to integrate deeper into development workflows, features like Gemini Journeys demonstrate how machine learning can augment human creativity rather than replace it. The future of Android development looks increasingly collaborative between human insight and artificial intelligence capabilities.
Try It Yourself
Interested in exploring Gemini Journeys? Check out the official documentation and consider experimenting with your own projects. The KoinBase demo is also available as a reference implementation.
The intersection of AI and mobile development continues to evolve rapidly, and Gemini Journeys represents an exciting step toward more intelligent, efficient development practices.
Building Android apps today is a lot about managing “state.” Think of state as all the information that makes your app tick: the text a user typed, whether a button is enabled, a list of items to display. As your app grows, managing this state can get tricky, making your code messy and hard to maintain.
Thankfully, Jetpack Compose, Android’s modern UI toolkit, offers some elegant patterns to keep your state under control. Let’s break down some of the key ideas, making them easier to understand than a complex technical paper.
The Core Idea: State Hoisting
Imagine you have a Checkbox in your app. It has two states: checked or unchecked. If the Checkbox manages its own state, it’s called “internal state.” But what if another part of your app needs to know if it’s checked?
This is where State Hoisting comes in. Instead of the Checkbox holding its own “checked” status, we “hoist” that status up to a parent component. The Checkbox then becomes a “dumb” component. It just shows what it’s told to show and tells its parent when it’s clicked.
Think of it like a child asking a parent for permission. The child (our Checkbox) doesn’t decide if it can have a cookie (change its state). It asks the parent (the higher-level component), and the parent makes the decision and tells the child what to do.
In Compose, this often looks like:
```kotlin
@Composable
fun MyFancyCheckbox(
    isChecked: Boolean, // The state is passed in
    onCheckedChange: (Boolean) -> Unit // An event is passed out
) {
    Checkbox(
        checked = isChecked,
        onCheckedChange = onCheckedChange // The parent handles the actual state update
    )
}

@Composable
fun ParentScreen() {
    var checkedState by rememberSaveable { mutableStateOf(false) } // Parent manages the state

    MyFancyCheckbox(
        isChecked = checkedState,
        onCheckedChange = { newCheckedState -> checkedState = newCheckedState }
    )
}
```
This makes MyFancyCheckbox reusable and testable because it doesn’t care how its state is managed, only what its state is and when it’s interacted with.
State Holders: Your State Organizers
As your app gets more complex, you’ll have more and more state. Just having a bunch of vars in your @Composable function can get unwieldy. This is where State Holders come in handy.
A State Holder is essentially a plain old Kotlin class that holds and manages a piece of your UI’s state. It centralizes all the logic related to that state.
Imagine a user profile screen. It might have the user’s name, email, and a “save” button. Instead of managing all these bits of information directly in your ProfileScreen Composable, you could have a ProfileScreenStateHolder (or ViewModel if it’s lifecycle-aware).
```kotlin
// A simple example of a State Holder
class MyLoginScreenStateHolder {
    var username by mutableStateOf("")
    var password by mutableStateOf("")

    fun onUsernameChanged(newUsername: String) {
        username = newUsername
    }

    fun onPasswordChanged(newPassword: String) {
        password = newPassword
    }

    fun login() {
        // Perform login logic using username and password
        println("Attempting to log in with username: $username")
    }
}

@Composable
fun LoginScreen(stateHolder: MyLoginScreenStateHolder = remember { MyLoginScreenStateHolder() }) {
    Column {
        TextField(
            value = stateHolder.username,
            onValueChange = stateHolder::onUsernameChanged,
            label = { Text("Username") }
        )
        TextField(
            value = stateHolder.password,
            onValueChange = stateHolder::onPasswordChanged,
            label = { Text("Password") }
        )
        Button(onClick = stateHolder::login) {
            Text("Login")
        }
    }
}
```
This separates the UI (LoginScreen) from the logic and state management (MyLoginScreenStateHolder), making your code cleaner and easier to understand.
ViewModels: The Android-Aware State Holders
When your State Holder needs to survive configuration changes (like rotating your phone) or interact with data from your app’s deeper layers (like a database or network), you often use a ViewModel.
A ViewModel is a special kind of State Holder provided by Android Architecture Components. It’s designed to hold UI-related data in a way that survives app lifecycle events. It’s often where you’ll find your network calls, database operations, and other business logic that feeds into your UI.
Think of it as the brain of your screen or feature. It fetches data, processes it, and then exposes that data to your Composables.
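To make this concrete, here is a minimal sketch of a lifecycle-aware state holder using the standard ViewModel and StateFlow APIs. The repository and its data are hypothetical placeholders, not a prescribed structure:

```kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.asStateFlow
import kotlinx.coroutines.launch

// Hypothetical data source; stands in for a real network or database layer.
class UserRepository {
    suspend fun fetchUserName(): String = "Jane Doe"
}

// Survives configuration changes; the UI only collects uiState.
class ProfileViewModel(
    private val repository: UserRepository = UserRepository()
) : ViewModel() {

    private val _uiState = MutableStateFlow("Loading…")
    val uiState: StateFlow<String> = _uiState.asStateFlow()

    fun loadProfile() {
        // viewModelScope is cancelled automatically when the ViewModel is cleared.
        viewModelScope.launch {
            _uiState.value = repository.fetchUserName()
        }
    }
}
```

In a Composable you would typically collect `uiState` with `collectAsState()` (or `collectAsStateWithLifecycle()`), so rotating the device recreates the UI but not the in-flight work.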
When to Choose What?
State Hoisting: For simple UI elements where the parent needs to control the state. It makes components reusable and less coupled.
Simple State Holders (Plain Kotlin classes): When you have a group of related UI state that needs to be managed together within a single Composable, and it doesn’t need to survive lifecycle changes or interact with deeper app layers.
ViewModels: For complex screens or features where you need to manage state that survives configuration changes, interacts with data sources (like network or database), or requires more complex business logic. They are typically used for a whole screen or a significant portion of it.
The Benefits of Good State Management
By applying these patterns, you gain:
Cleaner Code: Your UI code focuses solely on how things look, not what data they hold or how that data changes.
Easier Testing: You can test your State Holders and ViewModels independently of your UI.
Better Reusability: Components become generic and can be used in different parts of your app.
Improved Maintainability: When something breaks, it’s easier to pinpoint where the issue lies.
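As an example of the testing benefit, the MyLoginScreenStateHolder shown earlier can be unit-tested on the JVM with no emulator or UI involved. This is a sketch using kotlin.test-style assertions:

```kotlin
import kotlin.test.Test
import kotlin.test.assertEquals

// Plain JVM unit tests for the MyLoginScreenStateHolder defined earlier;
// no Compose UI or emulator is needed to exercise the state logic.
class MyLoginScreenStateHolderTest {

    @Test
    fun usernameIsUpdatedWhenChanged() {
        val holder = MyLoginScreenStateHolder()
        holder.onUsernameChanged("alice")
        assertEquals("alice", holder.username)
    }

    @Test
    fun passwordIsUpdatedWhenChanged() {
        val holder = MyLoginScreenStateHolder()
        holder.onPasswordChanged("secret")
        assertEquals("secret", holder.password)
    }
}
```

Because `mutableStateOf` lives in the compose-runtime artifact, which runs on the plain JVM, such tests execute as ordinary unit tests.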
Understanding and applying these state management patterns in Jetpack Compose will significantly improve the quality and maintainability of your Android applications. It’s a fundamental concept that will serve you well as you build more complex and robust experiences.
Understanding when LaunchedEffect, DisposableEffect, and composables run in Jetpack Compose can be tricky. Let’s simplify with a few real-world analogies.
🎭 Composables = Stage Actors
Composables are like actors:
They enter when the screen appears.
They update their lines when state changes (recomposition).
They exit when removed from the UI.
🕯️ LaunchedEffect = Candle in a Room
You light a candle when entering a room → LaunchedEffect runs.
If the room changes (key changes), you blow it out and light a new one.
If you leave, the candle is blown out.
Use it for one-time effects or state collection.
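In code, the candle analogy might look like this sketch, where `loadProfile` is a hypothetical suspend function passed in by the caller:

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect

// Sketch: "light the candle" when the screen appears or userId changes.
@Composable
fun ProfileScreen(userId: String, loadProfile: suspend (String) -> Unit) {
    LaunchedEffect(userId) {
        // Runs on first composition and again whenever userId changes;
        // the previous coroutine is cancelled first (the old candle is blown out).
        loadProfile(userId)
    }
}
```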
🧹 DisposableEffect = Hotel Housekeeping
Housekeeper sets up the room → DisposableEffect runs.
When you check out (or key changes), the room is cleaned → onDispose is called.
Perfect for listeners or subscriptions that need cleanup.
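And the housekeeping analogy, sketched with a hypothetical `ConnectivityListener` standing in for any subscription that needs cleanup:

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.DisposableEffect

// Hypothetical listener type; stands in for any callback-based subscription.
interface ConnectivityListener {
    fun register()
    fun unregister()
}

@Composable
fun NetworkBanner(listener: ConnectivityListener) {
    DisposableEffect(listener) {
        listener.register() // housekeeping sets up the room
        onDispose {
            listener.unregister() // room is cleaned on leave or key change
        }
    }
}
```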
🔄 Recomposition = Changing Actor’s Lines
If the script (state) changes, actors stay on stage but adjust their lines. No need to re-run effects unless keys change.
Quick Comparison
| Concept | Analogy | When It Runs | When It Cleans Up |
| --- | --- | --- | --- |
| Composable | Actor | On screen draw/state change | On removal |
| LaunchedEffect | Candle | On enter/key change | On key change/removal |
| DisposableEffect | Housekeeping | On enter/key change | On key change/removal |
✅ Final Tip
So next time you add a LaunchedEffect or a DisposableEffect, ask yourself:
Is this a one-time action? → Use LaunchedEffect.
Does it need cleanup? → Use DisposableEffect.
Thinking this way makes Compose easier and your code cleaner.
The world of extended reality (XR) is expanding rapidly, merging physical and digital realms to create immersive experiences. Android XR offers a versatile platform for developers to build applications that blend augmented reality (AR) and virtual reality (VR) into everyday life. In this post, we’ll explore the essentials of Android XR and provide you with a starting point to dive into this exciting technology.
What is Android XR?
XR (Extended Reality) encompasses all immersive technologies—AR, VR, and mixed reality (MR). Android XR integrates these experiences seamlessly into Android devices, allowing developers to create cutting-edge applications that:
Overlay digital objects on the real world (AR).
Fully immerse users in virtual environments (VR).
Combine real and virtual objects that interact in real-time (MR).
Android’s XR ecosystem is built on frameworks like ARCore and leverages powerful hardware capabilities available in modern devices.
Key Components of Android XR
1. ARCore
ARCore is Android’s primary SDK for building AR applications. It provides tools to:
Track motion in 3D space.
Understand environmental features like flat surfaces.
Estimate lighting conditions for realistic AR rendering.
2. XR Interaction Tools
Android XR provides APIs and libraries to simplify interactions, such as detecting gestures or recognizing physical objects. Developers can use Unity or Unreal Engine to create rich 3D experiences or integrate ARCore directly into Android apps for custom solutions.
3. Cross-Platform Development
Android XR supports frameworks like OpenXR, making it easier to build applications that work across multiple devices, from smartphones to head-mounted displays (HMDs).
Getting Started with Android XR Development
1. Set Up Your Development Environment
Start by installing Android Studio and configuring it for XR development:
Install the latest version of Android Studio.
Add the ARCore dependency to your project.
Use a physical device with ARCore support for testing.
Try creating a simple AR app that displays a 3D object on a flat surface. ARCore’s Plane Detection API can help you get started quickly.
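For reference, the setup steps above largely come down to one Gradle dependency (the version number below is illustrative; check the latest ARCore release):

```kotlin
// build.gradle.kts (module level): add the ARCore SDK
dependencies {
    implementation("com.google.ar:core:1.41.0")
}
```

You will also need to declare ARCore in your AndroidManifest.xml via the `com.google.ar.core` meta-data entry (set to `required` or `optional`) so the Play Store can filter for supported devices.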
My Android XR Demo Project
To help you jumpstart your journey, I’ve created a simple demo app showcasing basic XR features using ARCore and Jetpack Compose. This project serves as a practical example to learn XR development fundamentals.