~30 minutes · Beginner · Android · ARCore · Jetpack Compose

Your First AR App with SceneView

Build an AR app where you tap the floor to place a 3D model in the real world. The model persists, can be moved by tapping again, and scales / rotates with gestures. In ~50 lines of Jetpack Compose.

  • AR camera view with plane detection
  • Tap-to-place a 3D model on any flat surface
  • Pinch-to-scale and two-finger-rotate gestures
  • Visual plane indicator (animated reticle)
Prerequisites: Android Studio (latest stable), basic Kotlin knowledge, a physical Android device with ARCore support. Check supported devices →
1

Create a new project

In Android Studio, create a new Empty Activity project:

  • Language: Kotlin
  • Minimum SDK: API 24 (Android 7.0)
  • Build configuration: Kotlin DSL

Make sure Jetpack Compose is enabled (it is by default in Empty Activity).

2

Add the SceneView dependency

In app/build.gradle.kts, add the AR dependency:

// app/build.gradle.kts
dependencies {
    implementation("io.github.sceneview:arsceneview:3.2.0")
}

In app/src/main/AndroidManifest.xml, add AR permissions and metadata:

<!-- AndroidManifest.xml -->
<manifest>
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-feature android:name="android.hardware.camera.ar"
                 android:required="true" />

    <application>
        <!-- Required: tells Play Store this app requires ARCore -->
        <meta-data android:name="com.google.ar.core"
                   android:value="required" />
        ...
    </application>
</manifest>
Use arsceneview for AR apps. For 3D-only (no camera), use sceneview instead.
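If you ever want the 3D-only variant, the dependency swap looks like this (keep whichever line matches your use case; check the SceneView releases page for the current version number):

```kotlin
// app/build.gradle.kts
dependencies {
    // AR (camera feed + plane detection) - used in this tutorial
    implementation("io.github.sceneview:arsceneview:3.2.0")

    // 3D-only alternative (no camera, no ARCore requirement):
    // implementation("io.github.sceneview:sceneview:3.2.0")
}
```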
3

Add your 3D model

Download a free .glb model and place the file at:

app/src/main/assets/models/object.glb
Keep models under 5 MB for fast AR loading. Use gltf-transform to compress and optimize.
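As a sketch, a typical gltf-transform run looks like this (assumes Node.js is installed; flag names may vary between CLI versions, so check `gltf-transform optimize --help` first):

```shell
# One-time install of the CLI
npm install -g @gltf-transform/cli

# Draco-compress meshes and recompress textures
gltf-transform optimize object.glb object-optimized.glb \
    --compress draco --texture-compress webp
```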
4

Write the AR screen

Create a new file ARPlacementScreen.kt in your package:

package com.example.myarapp

import androidx.compose.foundation.layout.*
import androidx.compose.material3.Text
import androidx.compose.runtime.*
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp
import androidx.compose.ui.unit.sp
import io.github.sceneview.ar.ARScene
import io.github.sceneview.ar.node.AnchorNode
import io.github.sceneview.node.ModelNode
import io.github.sceneview.rememberEngine
import io.github.sceneview.rememberModelInstance
import io.github.sceneview.rememberModelLoader

@Composable
fun ARPlacementScreen() {
    val engine = rememberEngine()
    val modelLoader = rememberModelLoader(engine)

    // The anchor node where the model is placed
    var anchorNode by remember { mutableStateOf<AnchorNode?>(null) }

    // Load the model (returns null while loading)
    val modelInstance = rememberModelInstance(modelLoader, "models/object.glb")

    // Hint text: disappears once the model is placed
    val hint = if (anchorNode == null)
        "Move your phone to detect surfaces, then tap to place"
    else ""

    Box(modifier = Modifier.fillMaxSize()) {

        ARScene(
            modifier = Modifier.fillMaxSize(),
            engine = engine,
            modelLoader = modelLoader,
            planeRenderer = true,         // Animated plane grid
            onSingleTapConfirmed = { hitResult ->
                // Destroy previous anchor (replace on tap)
                anchorNode?.destroy()

                // Create a new anchor at the tapped position
                anchorNode = AnchorNode(
                    engine = engine,
                    anchor = hitResult.createAnchor()
                ).apply {
                    isEditable = true  // pinch-scale + two-finger-rotate
                }
            }
        ) {
            // Place the model at the anchor when both are ready
            anchorNode?.let { anchor ->
                modelInstance?.let { instance ->
                    ModelNode(
                        modelInstance = instance,
                        scaleToUnits = 0.5f,      // Scale to 50cm
                        autoAnimate = true,
                        animationLoop = true
                    )
                }
            }
        }

        // Hint text overlay
        if (hint.isNotEmpty()) {
            Text(
                text = hint,
                color = Color.White,
                fontSize = 14.sp,
                modifier = Modifier
                    .align(Alignment.BottomCenter)
                    .padding(bottom = 48.dp)
            )
        }
    }
}
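The scaleToUnits parameter normalizes model size: SceneView scales the model so its largest bounding-box dimension equals the given value in meters. A plain-Kotlin sketch of that arithmetic (scaleFactorFor is a hypothetical helper for illustration, not part of the SceneView API):

```kotlin
// Illustration only: how a scaleToUnits-style normalization works.
// maxExtent is the model's largest bounding-box dimension in meters.
fun scaleFactorFor(targetUnits: Float, maxExtent: Float): Float =
    targetUnits / maxExtent

fun main() {
    // A model whose bounding box is 2 m tall, scaled down to 50 cm:
    println(scaleFactorFor(0.5f, 2.0f))   // 0.25
}
```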
5

Set it as the main screen

In MainActivity.kt, set ARPlacementScreen as the content:

// MainActivity.kt
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.material3.MaterialTheme

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            MaterialTheme {
                ARPlacementScreen()
            }
        }
    }
}
6

Run the app

Connect a physical Android device: AR requires a real camera, and emulators don't work.

  1. Enable Developer Options and USB Debugging on your device
  2. Click Run ▶ in Android Studio
  3. Point your phone at a floor or table
  4. Move slowly to detect the surface; an animated grid appears
  5. Tap anywhere on the grid to place the model
  6. Pinch to scale · two-finger drag to rotate
AR model placed on floor
If ARCore isn't installed on the device, the app will automatically prompt the user to install it from the Play Store.
?

How it works

ARScene { } composable

The ARScene composable handles everything automatically:

  • Camera permission request
  • ARCore session initialization
  • Surface tracking and plane detection
  • The camera feed as background

You provide the onSingleTapConfirmed callback and the content block. ARCore does the rest.

AnchorNode

An AnchorNode locks 3D content to a fixed real-world position. As ARCore refines its understanding of the scene, the anchor updates automatically, keeping your model pinned to the surface even as you move.
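The tap handler's destroy-then-replace flow can be modeled in plain Kotlin. FakeAnchor and Placement below are illustrative stand-ins, not SceneView types:

```kotlin
// Illustrative stand-ins for the anchor lifecycle in the tap handler.
class FakeAnchor {
    var destroyed = false
        private set
    fun destroy() { destroyed = true }
}

class Placement {
    var anchor: FakeAnchor? = null
        private set

    // Mirrors onSingleTapConfirmed: drop the old anchor, create a new one.
    fun placeAt(): FakeAnchor {
        anchor?.destroy()
        return FakeAnchor().also { anchor = it }
    }

    // Mirrors the "Remove" button added in step 7.
    fun remove() {
        anchor?.destroy()
        anchor = null
    }
}

fun main() {
    val p = Placement()
    val first = p.placeAt()
    val second = p.placeAt()   // replaces the first anchor
    println(first.destroyed)   // true
    println(second.destroyed)  // false
    p.remove()
    println(second.destroyed)  // true
}
```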

Setting isEditable = true enables:

  • Pinch → scale the model
  • Two-finger drag → rotate around the Y axis

rememberModelInstance

rememberModelInstance loads the GLB file asynchronously on the main thread (required by Filament). It returns null while loading. The ?.let { } null check means nothing renders until loading is complete: no crash, no flash.

Filament threading rule: never call model loading from a background coroutine. The remember* helpers handle this correctly โ€” always use them in composables.
7

Add a "Remove" button

Let users delete the placed model. Add this inside your Box, below the ARScene:

if (anchorNode != null) {
    Button(
        onClick = {
            anchorNode?.destroy()
            anchorNode = null
        },
        modifier = Modifier
            .align(Alignment.TopEnd)
            .padding(16.dp)
    ) {
        Text("Remove")
    }
}

The button only appears after a model is placed; the if (anchorNode != null) check drives the UI reactively from state, just like any other Compose conditional.

8

Model switching (optional)

Let users choose from multiple models with FilterChip selectors:

val models = listOf(
    "Chair" to "models/chair.glb",
    "Car" to "models/car.glb"
)
var selectedModel by remember { mutableStateOf("models/chair.glb") }
val modelInstance = rememberModelInstance(modelLoader, selectedModel)

// In your Box, add a chip row at the bottom:
Row(
    modifier = Modifier
        .align(Alignment.BottomCenter)
        .padding(bottom = 32.dp)
) {
    models.forEach { (label, path) ->
        FilterChip(
            selected = selectedModel == path,
            onClick = { selectedModel = path },
            label = { Text(label) }
        )
    }
}

When selectedModel changes, rememberModelInstance reloads automatically. The old model is cleaned up. Zero manual lifecycle management.
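That keyed reload-and-cleanup behavior can be sketched in plain Kotlin. ModelCache below is a hypothetical illustration of the pattern, not SceneView's actual implementation:

```kotlin
// Hypothetical sketch of keyed loading: when the requested path changes,
// the previous entry is released and a new one is loaded.
class ModelCache(private val load: (String) -> String) {
    val released = mutableListOf<String>()
    private var currentPath: String? = null
    private var currentModel: String? = null

    fun get(path: String): String {
        if (path != currentPath) {
            currentPath?.let { released += it }   // clean up the old model
            currentModel = load(path)
            currentPath = path
        }
        return currentModel!!
    }
}

fun main() {
    val cache = ModelCache { path -> "instance:$path" }
    println(cache.get("models/chair.glb"))  // loads the chair
    println(cache.get("models/chair.glb"))  // same path: no reload
    println(cache.get("models/car.glb"))    // chair released, car loaded
    println(cache.released)                 // [models/chair.glb]
}
```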

✓

Summary

You built a tap-to-place AR app in ~50 lines of Compose. Here's what you used:

  • ARScene { }: AR camera + plane detection, all managed automatically
  • AnchorNode: locks 3D content to a real-world surface
  • ModelNode: renders the GLB model at the anchor position
  • rememberModelInstance: async model loading with automatic cleanup
  • isEditable = true: pinch-to-scale + two-finger-rotate gestures

Next steps