Compare commits

...

4 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| everbarry | 50ddcb60ad | add model description | 2026-04-01 15:35:17 +02:00 |
| everbarry | 60ead1ff06 | add cam dist | 2026-03-25 15:48:13 +01:00 |
| everbarry | 8bc7eb9f4b | no unicode | 2026-03-19 15:37:56 +01:00 |
| everbarry | 05d5320455 | add bear warning | 2026-03-19 14:05:30 +01:00 |
2 changed files with 187 additions and 10 deletions

MODEL_CARD.md (new file)

@@ -0,0 +1,123 @@
# Model Card: EfficientNet V2-S — Wild Forest Animals
This document describes the image classifier used by the Wildlife Monitoring Dashboard (0HM340 Human-AI Interaction, TU/e). It follows the spirit of [Model Cards for Model Reporting](https://arxiv.org/abs/1810.03993).
---
## Model summary
| Field | Value |
|--------|--------|
| **Model family** | EfficientNet V2-S (`torchvision.models.efficientnet_v2_s`) |
| **Pre-training** | ImageNet-1K (`EfficientNet_V2_S_Weights.IMAGENET1K_V1`) |
| **Task** | Multi-class image classification (single label per image) |
| **Output** | 7 logits → softmax probabilities over fixed classes |
| **Weights file** | `efficientnet_v2_wild_forest_animals.pt` (state dict only; not shipped in the repo) |
| **Input** | RGB images resized to **224×224**, ImageNet normalization |
| **Framework** | PyTorch |
---
## Intended use
- **Primary:** Educational / research prototype for human-AI interaction and explainability (dashboard with ScoreCAM, LIME, nearest-neighbour views).
- **Deployment context:** Simulated camera-trap workflow in a demo UI; **not** validated for real wildlife management, safety-critical decisions, or law enforcement.
**Out-of-scope uses:** Do not rely on this model for operational conservation decisions, species surveys with legal implications, or any setting where errors could cause harm without independent verification.
---
## Output classes
Fixed label set (order matches the classifier head):
| Index | Class |
|------|--------|
| 0 | bear |
| 1 | deer |
| 2 | fox |
| 3 | hare |
| 4 | moose |
| 5 | person |
| 6 | wolf |
The dashboard narrative may reference a specific national park; **the model was not trained on data from that specific park** — see *Training data*.
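Because the index order above is fixed to match the classifier head, decoding a prediction is just softmax plus argmax. The helper below is an illustrative sketch, not a function from the repository:

```python
import torch

# Index order matches the classifier head, per the table above.
CLASSES = ["bear", "deer", "fox", "hare", "moose", "person", "wolf"]


def decode_prediction(logits: torch.Tensor):
    """Turn the 7 raw logits into a (label, confidence) pair."""
    probs = torch.softmax(logits, dim=-1)
    idx = int(probs.argmax(dim=-1))
    return CLASSES[idx], float(probs[idx])
```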
---
## Training data
- **Source:** Roboflow project `wild-forest-animals-and-person`, workspace `forestanimals`, **version 1**, export format **multiclass**.
- **Local layout:** `wild-forest-animals-and-person-1/` with `train/`, `valid/`, `test/` splits and `_classes.csv` per split (one-hot columns per class).
- **Label handling:** Rows with multiple positive labels in the CSV use the **first** positive class only (single-label training).
- **Domain:** Mixed camera-trap / wild-animal imagery bundled by the dataset authors; distribution across species, geography, lighting, and quality follows that dataset — **not** guaranteed to match any real park's fauna or camera setup.
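The first-positive-class rule can be sketched as below. The exact column layout of the Roboflow `_classes.csv` (a filename column followed by one 0/1 column per class) is an assumption about the export format, and `load_labels` is an illustrative name:

```python
import csv


def load_labels(csv_path):
    """Read a multiclass _classes.csv and keep only the FIRST positive
    class per row, mirroring the single-label rule described above."""
    labels = {}
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        class_cols = [c.strip() for c in header[1:]]
        for row in reader:
            fname = row[0]
            for name, flag in zip(class_cols, row[1:]):
                if flag.strip() == "1":
                    labels[fname] = name  # first positive wins
                    break
    return labels
```

Rows with several positive columns therefore contribute exactly one training label.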
---
## Training procedure
Implemented in `train.py` (see repository for exact defaults).
| Setting | Default |
|---------|---------|
| Optimizer | Adam |
| Loss | Cross-entropy |
| Mixed precision | Enabled on CUDA (`autocast` + `GradScaler`) |
| Train augmentations | Random horizontal flip (p=0.5), then ToTensor, Resize 224, ImageNet normalize |
| Evaluation augmentations | ToTensor, Resize 224, ImageNet normalize |
| DataLoader shuffle (train) | Yes, with fixed generator seed |
| Reproducibility | `SEED = 42`; CUDNN deterministic mode enabled in training script |
**Default hyperparameters (CLI overridable):** epochs `3`, batch size `32`, learning rate `1e-3`. Example overrides: `--epochs 5`, `--lr 0.0005`, `--batch-size 16`.
**Reported metrics:** The training script prints validation loss/accuracy per epoch and **test** loss/accuracy after the last epoch. Exact numbers depend on run, hardware, and hyperparameters; **record your own metrics** when you train. Weights in the repo are not pinned to a single certified benchmark run.
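The recipe in the table roughly corresponds to a loop like the one below. This is a hedged approximation of what `train.py` does, not its exact code; `train_one_epoch` is an invented name:

```python
import torch
from torch import nn


def train_one_epoch(model, loader, optimizer, device):
    """One epoch of the recipe above: Adam + cross-entropy, with
    autocast/GradScaler active only when running on CUDA."""
    criterion = nn.CrossEntropyLoss()
    use_amp = device.type == "cuda"
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
    model.train()
    total, correct, loss_sum = 0, 0, 0.0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad(set_to_none=True)
        with torch.autocast(device_type=device.type, enabled=use_amp):
            logits = model(images)
            loss = criterion(logits, targets)
        # GradScaler is a no-op when AMP is disabled (CPU runs).
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        loss_sum += loss.item() * images.size(0)
        correct += (logits.argmax(1) == targets).sum().item()
        total += images.size(0)
    return loss_sum / total, correct / total
```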
---
## Evaluation
- **Split:** Held-out `test` folder from the Roboflow export.
- **Metric:** Top-1 accuracy and cross-entropy loss on the test loader (see console output from `train.py`).
- **Limitations:** No per-class confusion matrix or calibration analysis in the default pipeline; no external geographic or temporal holdout.
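If you want the per-class view the default pipeline lacks, a confusion matrix can be accumulated from the test loader as sketched here (illustrative code, not part of `train.py`):

```python
import torch


def confusion_matrix(model, loader, num_classes=7, device="cpu"):
    """Accumulate a confusion matrix (rows = true class, cols = predicted).
    Top-1 accuracy is then cm.diag().sum() / cm.sum()."""
    cm = torch.zeros(num_classes, num_classes, dtype=torch.long)
    model.eval()
    with torch.no_grad():
        for images, targets in loader:
            preds = model(images.to(device)).argmax(1).cpu()
            for t, p in zip(targets, preds):
                cm[t, p] += 1
    return cm
```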
---
## Ethical and fairness considerations
- **“Person” class:** Predictions can affect privacy perceptions in camera-trap settings; treat as a coarse label, not identity or intent.
- **Wildlife labels:** Errors could misrepresent which species are present; the UI supports **manual verification** — use it when stakes matter.
- **Deployment:** Automated alerts (e.g. wolf/bear warnings in the demo) are **illustrative**; they should not replace expert assessment or park regulations.
---
## Caveats and limitations
1. **Domain shift:** Performance will drop on images that differ strongly from the training distribution (new sensors, night IR, heavy occlusion, rare poses).
2. **Single label:** Images with multiple species only contribute one label during training; the model is not trained for multi-label detection.
3. **Geographic / ecological claims:** Class names refer to species types; **the model does not prove** an animal's presence in any specific jurisdiction or ecosystem.
4. **Weights:** If you did not train the checkpoint yourself, treat reported behaviour as **unknown** until you evaluate on your data.
5. **API keys / data download:** Training and dashboard can auto-download data via Roboflow; use your own keys and comply with Roboflow terms in production-like setups.
---
## How to reproduce
```bash
uv sync
uv run python train.py
# Optional: uv run python train.py --epochs 5 --lr 0.0005 --batch-size 16
```
This produces `efficientnet_v2_wild_forest_animals.pt` compatible with `dashboard.py` and `main.py`.
---
## Citation / contact
- **Course / context:** 0HM340 Human-AI Interaction, Eindhoven University of Technology.
- **Base architecture:** Tan & Le, *EfficientNetV2*, 2021 (via torchvision).
- For questions about this card or the codebase, refer to the project `README.md` and `DEVELOPER_GUIDE.md`.
---
*Last updated to match repository layout and training script defaults; update this file if classes, dataset version, or training recipe change.*

dashboard.py

@@ -2,8 +2,8 @@
 Wildlife Monitoring Dashboard Yellowstone National Park
 Flask app with two pages:
-/ fullscreen map with camera markers + togglable sidebar
-/det/<id> detection detail page with all XAI visualisations
+/ - fullscreen map with camera markers + togglable sidebar
+/det/<id> - detection detail page with all XAI visualisations
 Run: uv run python dashboard.py
 """
@@ -358,7 +358,13 @@ def camera(cam_id):
     if cam_id not in CAMERAS:
         return "Camera not found", 404
     cam_dets = [d for d in reversed(detections) if d["cam"] == cam_id]
-    return render_template_string(CAM_HTML, cam_id=cam_id, cam=CAMERAS[cam_id], dets=cam_dets)
+    return render_template_string(
+        CAM_HTML,
+        cam_id=cam_id,
+        cam=CAMERAS[cam_id],
+        dets=cam_dets,
+        class_names=CLASS_NAMES,
+    )
 @app.route("/api/verify/<det_id>", methods=["POST"])
@@ -698,20 +704,21 @@ function flashCam(cid){
 if(mk){mk.classList.add('active');setTimeout(()=>mk.classList.remove('active'),6000);}
 }
-function showWolfWarning(cid){
+function showDangerWarning(cid,species){
 const c=CAMS[cid]; if(!c)return;
 const wrap=document.getElementById('map-wrap');
-const old=document.getElementById('wolf-'+cid);
+const key=species+'-'+cid;
+const old=document.getElementById('warn-'+key);
 if(old)old.remove();
-const oldLbl=document.getElementById('wolf-lbl-'+cid);
+const oldLbl=document.getElementById('warn-lbl-'+key);
 if(oldLbl)oldLbl.remove();
 const circle=document.createElement('div');
-circle.className='wolf-warn';circle.id='wolf-'+cid;
+circle.className='wolf-warn';circle.id='warn-'+key;
 circle.style.left=c.px+'%';circle.style.top=c.py+'%';
 const lbl=document.createElement('div');
-lbl.className='wolf-warn-label';lbl.id='wolf-lbl-'+cid;
+lbl.className='wolf-warn-label';lbl.id='warn-lbl-'+key;
 lbl.style.left=c.px+'%';lbl.style.top=c.py+'%';
-lbl.textContent='\u26A0 Wolf detected';
+lbl.textContent='\u26A0 '+species[0].toUpperCase()+species.slice(1)+' detected';
 wrap.appendChild(circle);wrap.appendChild(lbl);
 setTimeout(()=>{circle.classList.add('fade');lbl.classList.add('fade');},10000);
 setTimeout(()=>{circle.remove();lbl.remove();},11500);
@@ -726,7 +733,7 @@ async function simulate(){
 const d=await r.json();
 dets.push(d);
 flashCam(d.cam);
-if(d.pred==='wolf') showWolfWarning(d.cam);
+if(d.pred==='wolf'||d.pred==='bear') showDangerWarning(d.cam,d.pred);
 toast(`<b>${ICONS[d.pred]||''} ${d.pred[0].toUpperCase()+d.pred.slice(1)}</b> detected at ${d.cam} (${d.cam_name}) \u2014 ${d.conf.toFixed(0)}%`);
 renderList();renderChart();renderHeatmap();
 }catch(e){}
@@ -1102,6 +1109,18 @@ body{font-family:system-ui,-apple-system,sans-serif;background:#0f172a;color:#e2
 .card-badge.unverified{background:rgba(148,163,184,.15);color:#64748b}
 .empty{text-align:center;padding:60px 20px;opacity:.4;font-size:14px;line-height:1.6}
+.cam-chart-panel{
+background:rgba(255,255,255,.04);border:1px solid rgba(255,255,255,.06);
+border-radius:14px;padding:16px 20px;margin-bottom:24px;
+}
+.cam-chart-panel h3{
+font-size:11px;opacity:.45;margin-bottom:12px;text-transform:uppercase;letter-spacing:.5px;
+}
+.bar-row{display:flex;align-items:center;margin-bottom:5px;font-size:12px;padding:3px 0;border-radius:5px}
+.bar-label{width:58px;text-align:right;padding-right:8px;opacity:.7}
+.bar-fill{height:16px;background:#3b82f6;border-radius:3px;transition:width .5s ease;min-width:2px}
+.bar-num{padding-left:6px;opacity:.45;font-size:11px}
 </style>
 </head>
 <body>
@@ -1115,6 +1134,10 @@ body{font-family:system-ui,-apple-system,sans-serif;background:#0f172a;color:#e2
 </div>
 <div class="container">
+<div class="cam-chart-panel">
+<h3>Detections by Species (this camera)</h3>
+<div id="cam-chart-bars"></div>
+</div>
 {% if dets %}
 <div class="grid">
 {% for d in dets %}
@@ -1142,6 +1165,37 @@ body{font-family:system-ui,-apple-system,sans-serif;background:#0f172a;color:#e2
 {% endif %}
 </div>
+<script>
+const CAM_ID={{ cam_id | tojson }};
+const CN={{ class_names | tojson }};
+const INITIAL={{ dets | tojson }};
+function renderCamChart(camDets){
+const counts={};
+CN.forEach(c=>{counts[c]=0});
+camDets.forEach(d=>{
+const p=d.pred;
+if(Object.prototype.hasOwnProperty.call(counts,p)) counts[p]++;
+});
+const mx=Math.max(...Object.values(counts),1);
+let h='';
+for(const sp of CN){
+const n=counts[sp];
+const pct=(n/mx)*100;
+h+=`<div class="bar-row"><div class="bar-label">${sp}</div>
+<div class="bar-fill" style="width:${pct}%;${n?'':'opacity:.25'}"></div>
+<div class="bar-num">${n}</div></div>`;
+}
+document.getElementById('cam-chart-bars').innerHTML=h;
+}
+renderCamChart(INITIAL);
+setInterval(async()=>{
+try{
+const r=await fetch('/api/detections');
+const all=await r.json();
+renderCamChart(all.filter(d=>d.cam===CAM_ID));
+}catch(e){}
+},3000);
+</script>
 </body></html>"""