One of the new features in the most recent libcamera/rpicam-apps/Picamera2 releases is the inclusion of the neural network-based AWB algorithm that we were working on last summer. This new work is NOT enabled by default so you won't see any differences unless you enable it. So I'll just say a few words on how to do this, and what you might expect.
1. Software Versions
You need at least libcamera version 0.7.0+rpt20260205.
We released this at the same time as rpicam-apps version 1.11.1 and Picamera2 version 0.3.34.
You also need TensorFlow Lite to be installed, but this will have happened automatically if you did a full-upgrade.
2. Enabling Neural Network AWB
Neural network AWB (AWB NN) is enabled through a couple of simple edits to the tuning file for your camera; there is no runtime libcamera control for it. It should be available for all official Raspberry Pi colour cameras, which excludes any mono or "no IR" devices. It is supported on both Pi 5 ("PiSP" platform) and older ("vc4" platform) devices, though it generally works better on the Pi 5.
To enable it, you first need to find the tuning file for your camera. This is normally under /usr/share/libcamera/ipa/rpi/<platform> where <platform> is either "pisp" or "vc4". The tuning file in use may be in /usr/local/share/... instead if you've built a local copy of libcamera. Before you start, I always recommend putting a few garbage characters at the top of the tuning file and checking that the camera doesn't start - just to be sure you've got the right file!
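If it helps, here's a small Python sketch of that search, checking both install locations. This isn't part of libcamera, and the sensor name "imx708" in the example is just an illustration - use whatever sensor your camera actually has:

```python
from pathlib import Path

# Standard install roots on Raspberry Pi OS; a locally built libcamera
# installs its files under /usr/local/share instead of /usr/share.
ROOTS = ["/usr/share", "/usr/local/share"]

def find_tuning_file(sensor, platform, roots=ROOTS):
    """Return the first tuning file found for the given sensor and platform
    ("pisp" for Pi 5, "vc4" for earlier models), or None if there isn't one."""
    for root in roots:
        candidate = Path(root) / "libcamera/ipa/rpi" / platform / f"{sensor}.json"
        if candidate.exists():
            return candidate
    return None

# Example: find_tuning_file("imx708", "pisp")
```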
The first step is to find the "enabled" field just below "rpi.awb":
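In the tuning file, the fragment looks something like this (the remaining fields and neighbouring entries are omitted here, and will vary by camera):

```json
"rpi.awb":
{
    "enabled": true,
    ...
```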
Change the value from true to false. Next, find the "enabled" field just below "rpi.nn.awb" and change false to true. Save the file and you're done.
You will need root privileges to edit these files; consider taking a backup first if you're nervous you might destroy them!
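If you'd rather script the edit, here's a minimal Python sketch. It assumes the usual tuning file layout, where each algorithm sits in its own object inside a top-level "algorithms" list - check that your file matches before running it (as root) against a backed-up copy:

```python
import json

def enable_nn_awb(path):
    """Flip "rpi.awb" off and "rpi.nn.awb" on in a libcamera tuning file."""
    with open(path) as f:
        tuning = json.load(f)
    # Each algorithm lives in its own single-key object in the "algorithms" list.
    for entry in tuning["algorithms"]:
        if "rpi.awb" in entry:
            entry["rpi.awb"]["enabled"] = False
        if "rpi.nn.awb" in entry:
            entry["rpi.nn.awb"]["enabled"] = True
    with open(path, "w") as f:
        json.dump(tuning, f, indent=4)
```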
3. Expectations
The AWB NN algorithm definitely produces different results from the default Bayesian algorithm, and the nature of those differences depends on the training data. Generally, we find it produces significantly better results, though your mileage may vary. These are still early days, of course, so we're interested in people's experiences.
Remember that it's a neural network, so where your images correspond to "similar" (in some unknowable way) scenes in the training set then we can be reasonably confident of a good result. Where your images are different - and practically all images are different to some extent - then the results are, well, uncertain. You may get something that approximates the most similar images in the training data, but it's not impossible that you'll get something drastically different too.
So what can we do if this happens?
The previous Bayesian algorithm actually has a variety of knobs that can be twiddled to control its behaviour and to tune it to work better in certain situations. In practice, however, hardly anyone - ourselves at Raspberry Pi included - ever did this.
Neural networks are completely unfathomable in their internal workings, but of course we have all the training infrastructure that comes with them. The solution, therefore, is to re-train the network with the problem images. We provide all the tools needed to do this, as well as the datasets we used to train the versions of the models that we ship.
4. Re-training the Networks
The code and datasets required to re-train the models can be found here: https://github.com/raspberrypi/awb_nn
The instructions there should be reasonably comprehensive, so I'll only give the briefest outline here.
Step 1. Use the supplied "snapper" application to capture images.
Step 2. Annotate the images with the correct colour temperature using the "annotator" application.
Step 3. Convert the images to a form that can be used for training. You'll have to do this twice, once for each platform. Converting for the "PiSP" target reduces the images to 32x32, to match the statistics format gathered internally by the hardware. Similarly, converting for the "vc4" target reduces images to 16x12, again to match the hardware.
Step 4. Train the model. You can use purely your own training images, or you can include some of ours, which are provided in the Datasets folder of the repository. These models are in fact tiny, so you don't need a fancy GPU; training will work fine on pretty much any CPU. It would even run pretty well on a Pi 5! The training focuses on reducing the worst-case errors and can therefore take some time, as the script runs the training over and over, trying to "bash" the worst offenders each time. Leaving it running in a loop over a weekend and selecting the best result is recommended.
Step 5. Finally there is a tool to convert the model to TensorFlow Lite format. You can simply replace awb_model.tflite (under /usr/share/libcamera/ipa/rpi/<platform>) with your new file. Again, maybe back up the old one first in case it all goes horribly wrong!
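To give a sense of how little data the network actually sees at Step 3, here's a rough sketch - not the real conversion tool from the repository - of what reducing an image to the PiSP 32x32 grid amounts to, using simple block averaging:

```python
import numpy as np

def block_average(image, out_h, out_w):
    """Reduce an (H, W, C) image to (out_h, out_w, C) by averaging equal
    rectangular blocks. Assumes H and W are exact multiples of the output size."""
    h, w, c = image.shape
    blocks = image.reshape(out_h, h // out_h, out_w, w // out_w, c)
    return blocks.mean(axis=(1, 3))

# e.g. a 640x480 capture becomes a 32x32 grid for the "pisp" target,
# or a 16x12 grid for the "vc4" target
small = block_average(np.zeros((480, 640, 3)), 32, 32)
```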
Statistics: Posted by therealdavidp — Thu Feb 19, 2026 4:03 pm — Replies 0 — Views 64