# Wildlife Monitoring Dashboard — Nationaal Park De Hoge Veluwe
A real-time wildlife camera-trap monitoring dashboard powered by an EfficientNet V2-S classifier and multiple Explainable AI (XAI) methods. Built for the 0HM340 Human-AI Interaction assignment.
The dashboard simulates a live feed of camera-trap detections across five locations in Nationaal Park De Hoge Veluwe, classifying images into seven species (bear, deer, fox, hare, moose, person, wolf) and providing interactive explanations of each prediction.
## Prerequisites

- Python 3.13+
- uv package manager
- CUDA GPU (optional; falls back to CPU)
## Quick Start

1. Clone the repository

   ```shell
   git clone https://git.barrys.cloud/barry/Wildlife-Detection.git
   cd Wildlife-Detection
   ```
2. Obtain the model weights

   The trained model weights are not included in the repository due to their size. Regenerate them by running the training script:

   ```shell
   uv run python train.py
   ```

   This fine-tunes EfficientNet V2-S for 3 epochs and saves `efficientnet_v2_wild_forest_animals.pt`. The dataset (`wild-forest-animals-and-person-1/`) is downloaded automatically from Roboflow on first run if not already present on disk. Optional flags:

   ```shell
   uv run python train.py --epochs 5 --lr 0.0005 --batch-size 16
   ```
3. Install dependencies

   ```shell
   uv sync
   ```
4. Run the dashboard

   ```shell
   uv run python dashboard.py
   ```

   The server starts at http://localhost:5000.
## Project Structure

```
assignment-HAI/
├── dashboard.py                             # Flask application (main entry point)
├── train.py                                 # Standalone training script
├── map.webp                                 # Park map background image
├── pyproject.toml                           # Project metadata and dependencies
├── uv.lock                                  # Locked dependency versions
├── .python-version                          # Python version pin (3.13)
├── .gitignore
├── README.md                                # This file
├── USER_GUIDE.md                            # End-user documentation
├── DEVELOPER_GUIDE.md                       # Developer documentation
├── efficientnet_v2_wild_forest_animals.pt   # Model weights (not in git)
└── wild-forest-animals-and-person-1/        # Dataset (not in git)
    ├── train/
    ├── test/
    └── valid/
```
## XAI Methods

| Method | What it shows |
|---|---|
| ScoreCAM | Gradient-free saliency heatmap highlighting regions the model attends to |
| LIME | Superpixel-based explanations for the top-2 predicted classes |
| Contrastive LIME | Highlights the regions that distinguish the top-1 prediction from the second-ranked class |
| Nearest Neighbours | The three training images closest to the input in feature space (cosine similarity) |
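The nearest-neighbours panel reduces to a cosine-similarity lookup over stored feature vectors. A minimal sketch of that step, assuming features have already been extracted (the function name, the feature bank, and the 1280-dimensional size are illustrative assumptions, not the dashboard's actual code):

```python
import numpy as np

def top_k_neighbours(query: np.ndarray, bank: np.ndarray, k: int = 3) -> np.ndarray:
    """Indices of the k rows of `bank` most cosine-similar to `query`."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ q                      # cosine similarity to every training feature
    return np.argsort(sims)[::-1][:k]  # highest similarity first

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bank = rng.normal(size=(10, 1280))              # stand-in training features
    query = bank[4] + 0.01 * rng.normal(size=1280)  # near training image 4
    print(top_k_neighbours(query, bank))            # image 4 ranks first
```

In the dashboard's terms, `bank` would hold one feature vector per training image and the returned indices select the three images shown alongside the prediction.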
## License

Academic project — 0HM340 Human-AI Interaction, TU/e.