MMAction2 is an open-source toolbox for video understanding based on PyTorch.
MMAction2 is another awesome project by OpenMMLab. If you're not familiar with OpenMMLab, it maintains a series of PyTorch-based repositories that implement multiple state-of-the-art algorithms for different vision-related tasks. Some of its projects include:
MMAction2: video action understanding.
MMClassification: image classification.
MMDetection: object detection.
MMDetection3D: general 3D object detection.
MMSegmentation: semantic segmentation.
MMOCR: text detection, recognition, and understanding.
MMPose: pose estimation.
This post provides a step-by-step guide on how to enable plot visualization in MMAction2.
Enabling MMAction2 Plots Visualization Tools
One of the reasons OpenMMLab projects are so useful is that they provide a feature-rich model development environment, including distributed training, testing, demoing, and dataset analysis and visualization utilities.
The following sections show how to enable the training visualization utilities. For this example, I'll be using the SlowFast network, which performs spatio-temporal action detection.
1. Identify your config file. In our case, we'll use the smallest configuration:
configs/detection/slowfast/slowfast_kinetics400-pretrained-r50_8xb8-8x8x1-20e_ava21-rgb.py
2. Append the following to the end of your config file:
vis_backends = [
dict(type='LocalVisBackend'),
dict(type='TensorboardVisBackend'),
dict(type='WandbVisBackend')
]
visualizer = dict(type='ActionVisualizer', vis_backends=vis_backends)
By default, only the LocalVisBackend is enabled; it writes all the logged information to the file system in the form of log files. The config above enables two additional plot visualization backends: TensorBoard and Weights & Biases (W&B).
Of course, you don't need both. Choose the one you prefer and remove the other, taking care to keep the Python syntax valid.
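For example, if you only want TensorBoard, the appended block reduces to something like this (a sketch, using the same backend names as above):

```python
# Keep the default local file logging plus the TensorBoard backend only.
vis_backends = [
    dict(type='LocalVisBackend'),
    dict(type='TensorboardVisBackend')
]
visualizer = dict(type='ActionVisualizer', vis_backends=vis_backends)
```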
Visualizing Training using TensorBoard
If you enabled the TensorBoard visualization backend, follow the steps below to visualize the plots:
1. Install TensorBoard
pip3 install tensorboard
2. Start the training process
python3 tools/train.py configs/detection/slowfast/slowfast_kinetics400-pretrained-r50_8xb8-8x8x1-20e_ava21-rgb.py
3. Identify your current run. Unless you changed the defaults, it will be under:
./work_dirs/<config_file>/<date_time>/vis_data
In our example, it will be:
./work_dirs/slowfast_kinetics400-pretrained-r50_8xb8-8x8x1-20e_ava21-rgb/20231111_220155/vis_data
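If you have several runs, a quick way to grab the most recent one is to sort the timestamped directories by modification time. This is a sketch that assumes the default work_dirs layout shown above:

```shell
# Run directories are named by date_time, so modification-time order
# matches creation order; -t sorts newest first, -d lists dirs themselves.
latest_run=$(ls -td work_dirs/*/*/ 2>/dev/null | head -n 1)
echo "latest run: ${latest_run}"
```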
4. Start TensorBoard and note the port it is serving on (typically 6006).
tensorboard --bind_all --logdir work_dirs/slowfast_kinetics400-pretrained-r50_8xb8-8x8x1-20e_ava21-rgb/20231111_220155/vis_data
5. Finally, open a browser and navigate to:
http://IP.ADDRESS.OF.SYSTEM:6006/
You'll see something like the following:
Visualizing Training using Weights and Biases
If you enabled the W&B visualization backend, follow the steps below to visualize the plots:
1. Install W&B
pip3 install wandb
2. Start the training process
python3 tools/train.py configs/detection/slowfast/slowfast_kinetics400-pretrained-r50_8xb8-8x8x1-20e_ava21-rgb.py
3. Follow the instructions in the console when prompted. This will only happen the first time you run wandb.
wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice:
Either create an account or use your existing one.
4. Finally, open a browser and navigate to:
https://wandb.ai
You'll find the runs under the mmaction2-tools project. It'll look like:
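If you'd rather group the runs under a project of your own, MMEngine's WandbVisBackend accepts an init_kwargs dict that is forwarded to wandb.init(). A sketch, where the project name is a hypothetical example:

```python
vis_backends = [
    dict(type='LocalVisBackend'),
    # init_kwargs is passed through to wandb.init();
    # 'my-slowfast-experiments' is a made-up project name -- pick your own.
    dict(type='WandbVisBackend',
         init_kwargs=dict(project='my-slowfast-experiments'))
]
visualizer = dict(type='ActionVisualizer', vis_backends=vis_backends)
```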
Additional Information
As you can imagine, all this infrastructure is huge. In most cases, the information provided here should be enough, but if you need more control, the following links should help: