Compare commits
No commits in common. "master" and "yellowstone" have entirely different histories.
master
...
yellowstone
MODEL_CARD.md
@@ -1,123 +0,0 @@
# Model Card: EfficientNet V2-S — Wild Forest Animals

This document describes the image classifier used by the Wildlife Monitoring Dashboard (0HM340 Human–AI Interaction, TU/e). It follows the spirit of [Model Cards for Model Reporting](https://arxiv.org/abs/1810.03993).

---
## Model summary

| Field | Value |
|-------|-------|
| **Model family** | EfficientNet V2-S (`torchvision.models.efficientnet_v2_s`) |
| **Pre-training** | ImageNet-1K (`EfficientNet_V2_S_Weights.IMAGENET1K_V1`) |
| **Task** | Multi-class image classification (single label per image) |
| **Output** | 7 logits → softmax probabilities over fixed classes |
| **Weights file** | `efficientnet_v2_wild_forest_animals.pt` (state dict only; not shipped in the repo) |
| **Input** | RGB images resized to **224×224**, ImageNet normalization |
| **Framework** | PyTorch |
---
## Intended use

- **Primary:** Educational / research prototype for human–AI interaction and explainability (dashboard with ScoreCAM, LIME, nearest-neighbour views).
- **Deployment context:** Simulated camera-trap workflow in a demo UI; **not** validated for real wildlife management, safety-critical decisions, or law enforcement.

**Out-of-scope uses:** Do not rely on this model for operational conservation decisions, species surveys with legal implications, or any setting where errors could cause harm without independent verification.
---
## Output classes

Fixed label set (order matches the classifier head):

| Index | Class |
|-------|-------|
| 0 | bear |
| 1 | deer |
| 2 | fox |
| 3 | hare |
| 4 | moose |
| 5 | person |
| 6 | wolf |
The dashboard narrative may reference a specific national park; **the model was not trained on data from that specific park** — see *Training data*.

---
## Training data

- **Source:** Roboflow project `wild-forest-animals-and-person`, workspace `forestanimals`, **version 1**, export format **multiclass**.
- **Local layout:** `wild-forest-animals-and-person-1/` with `train/`, `valid/`, `test/` splits and a `_classes.csv` per split (one-hot columns per class).
- **Label handling:** Rows with multiple positive labels in the CSV use the **first** positive class only (single-label training).
- **Domain:** Mixed camera-trap / wild-animal imagery bundled by the dataset authors; the distribution across species, geography, lighting, and quality follows that dataset and is **not** guaranteed to match any real park’s fauna or camera setup.

---
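The first-positive-class rule can be illustrated with a tiny parser. The sample header and the `load_labels` helper below are hypothetical (the real `_classes.csv` header and the repo's loader may differ); only the tie-breaking rule itself is taken from this card:

```python
import csv
import io

# Each row: filename plus one-hot columns, one per class. When several
# columns are positive, only the FIRST positive class (in column order)
# is kept, matching the single-label training described above.
SAMPLE = """filename,bear,deer,fox,hare,moose,person,wolf
img_001.jpg,0,1,0,0,0,0,0
img_002.jpg,0,0,1,0,0,0,1
"""

def load_labels(fp):
    reader = csv.reader(fp)
    header = next(reader)
    classes = header[1:]
    labels = {}
    for row in reader:
        fname, flags = row[0], row[1:]
        # first positive column wins (single-label training)
        idx = next(i for i, v in enumerate(flags) if v.strip() == "1")
        labels[fname] = classes[idx]
    return labels

labels = load_labels(io.StringIO(SAMPLE))
print(labels)  # {'img_001.jpg': 'deer', 'img_002.jpg': 'fox'}
```

Note how `img_002.jpg` is tagged both `fox` and `wolf` in the sample, but only `fox` (the first positive column) survives.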
## Training procedure

Implemented in `train.py` (see the repository for exact defaults).

| Setting | Default |
|---------|---------|
| Optimizer | Adam |
| Loss | Cross-entropy |
| Mixed precision | Enabled on CUDA (`autocast` + `GradScaler`) |
| Train augmentations | Random horizontal flip (p=0.5), then ToTensor, Resize 224, ImageNet normalize |
| Evaluation augmentations | ToTensor, Resize 224, ImageNet normalize |
| DataLoader shuffle (train) | Yes, with fixed generator seed |
| Reproducibility | `SEED = 42`; cuDNN deterministic mode enabled in the training script |
**Default hyperparameters (CLI-overridable):** epochs `3`, batch size `32`, learning rate `1e-3`. Example overrides: `--epochs 5`, `--lr 0.0005`, `--batch-size 16`.

**Reported metrics:** The training script prints validation loss/accuracy per epoch and **test** loss/accuracy after the last epoch. Exact numbers depend on the run, hardware, and hyperparameters, so **record your own metrics** when you train. Weights in the repo are not pinned to a single certified benchmark run.
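Put together, the reproducibility and mixed-precision settings from the table look roughly like this. A sketch only, not `train.py` itself: the small linear model stands in for the real classifier, and the exact calls in the script may differ:

```python
import torch

SEED = 42

# Reproducibility setup matching the table above.
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
gen = torch.Generator().manual_seed(SEED)  # passed to the train DataLoader

# One mixed-precision training step (autocast + GradScaler on CUDA);
# on CPU both are disabled and this runs as plain FP32.
device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"
model = torch.nn.Linear(10, 7).to(device)   # stand-in for the classifier
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(32, 10, device=device)          # fake batch of features
y = torch.randint(0, 7, (32,), device=device)   # fake labels

opt.zero_grad()
with torch.autocast(device_type=device, enabled=use_amp):
    loss = loss_fn(model(x), y)
scaler.scale(loss).backward()   # scaled backward pass
scaler.step(opt)                # unscales grads, then optimizer step
scaler.update()
```

With `enabled=False`, `GradScaler` and `autocast` are no-ops, so the same step code serves both CPU and CUDA paths.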
---
## Evaluation

- **Split:** Held-out `test` folder from the Roboflow export.
- **Metric:** Top-1 accuracy and cross-entropy loss on the test loader (see the console output from `train.py`).
- **Limitations:** No per-class confusion matrix or calibration analysis in the default pipeline; no external geographic or temporal holdout.
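Top-1 accuracy over a test loader can be computed like this. A minimal sketch: `top1_accuracy` is an illustrative helper, not a function from the repo, and the identity "model" plus hand-built loader exist only to make the example self-contained:

```python
import torch

def top1_accuracy(model, loader, device="cpu"):
    """Fraction of samples whose argmax logit matches the label."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            logits = model(x.to(device))
            correct += (logits.argmax(dim=1) == y.to(device)).sum().item()
            total += y.numel()
    return correct / max(total, 1)

# Self-contained check: identity "model" on one-hot inputs is always right.
model = torch.nn.Identity()
loader = [(torch.eye(7), torch.arange(7))]
print(top1_accuracy(model, loader))  # 1.0
```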
---
## Ethical and fairness considerations

- **“Person” class:** Predictions can affect privacy perceptions in camera-trap settings; treat it as a coarse label, not identity or intent.
- **Wildlife labels:** Errors could misrepresent which species are present; the UI supports **manual verification** — use it when stakes matter.
- **Deployment:** Automated alerts (e.g. the demo’s wolf/bear warnings) are **illustrative**; they should not replace expert assessment or park regulations.
---
## Caveats and limitations

1. **Domain shift:** Performance will drop on images that differ strongly from the training distribution (new sensors, night IR, heavy occlusion, rare poses).
2. **Single label:** Images with multiple species contribute only one label during training; the model is not trained for multi-label detection.
3. **Geographic / ecological claims:** Class names refer to species types; **the model does not prove** an animal’s presence in any specific jurisdiction or ecosystem.
4. **Weights:** If you did not train the checkpoint yourself, treat its reported behaviour as **unknown** until you evaluate on your own data.
5. **API keys / data download:** Training and the dashboard can auto-download data via Roboflow; use your own keys and comply with Roboflow’s terms in production-like setups.
---
## How to reproduce

```bash
uv sync
uv run python train.py
# Optional: uv run python train.py --epochs 5 --lr 0.0005 --batch-size 16
```

This produces `efficientnet_v2_wild_forest_animals.pt`, compatible with `dashboard.py` and `main.py`.
---
## Citation / contact

- **Course / context:** 0HM340 Human–AI Interaction, Eindhoven University of Technology.
- **Base architecture:** Tan & Le, *EfficientNetV2*, 2021 (via torchvision).
- For questions about this card or the codebase, see the project `README.md` and `DEVELOPER_GUIDE.md`.

---

*Last updated to match the repository layout and training-script defaults; update this file if the classes, dataset version, or training recipe change.*
dashboard.py
@@ -2,8 +2,8 @@
 Wildlife Monitoring Dashboard — Yellowstone National Park

 Flask app with two pages:
-/ - fullscreen map with camera markers + togglable sidebar
-/det/<id> - detection detail page with all XAI visualisations
+/ – fullscreen map with camera markers + togglable sidebar
+/det/<id> – detection detail page with all XAI visualisations

 Run: uv run python dashboard.py
 """
@@ -358,13 +358,7 @@ def camera(cam_id):
     if cam_id not in CAMERAS:
         return "Camera not found", 404
     cam_dets = [d for d in reversed(detections) if d["cam"] == cam_id]
-    return render_template_string(
-        CAM_HTML,
-        cam_id=cam_id,
-        cam=CAMERAS[cam_id],
-        dets=cam_dets,
-        class_names=CLASS_NAMES,
-    )
+    return render_template_string(CAM_HTML, cam_id=cam_id, cam=CAMERAS[cam_id], dets=cam_dets)


 @app.route("/api/verify/<det_id>", methods=["POST"])
@@ -704,21 +698,20 @@ function flashCam(cid){
   if(mk){mk.classList.add('active');setTimeout(()=>mk.classList.remove('active'),6000);}
 }

-function showDangerWarning(cid,species){
+function showWolfWarning(cid){
   const c=CAMS[cid]; if(!c)return;
   const wrap=document.getElementById('map-wrap');
-  const key=species+'-'+cid;
-  const old=document.getElementById('warn-'+key);
+  const old=document.getElementById('wolf-'+cid);
   if(old)old.remove();
-  const oldLbl=document.getElementById('warn-lbl-'+key);
+  const oldLbl=document.getElementById('wolf-lbl-'+cid);
   if(oldLbl)oldLbl.remove();
   const circle=document.createElement('div');
-  circle.className='wolf-warn';circle.id='warn-'+key;
+  circle.className='wolf-warn';circle.id='wolf-'+cid;
   circle.style.left=c.px+'%';circle.style.top=c.py+'%';
   const lbl=document.createElement('div');
-  lbl.className='wolf-warn-label';lbl.id='warn-lbl-'+key;
+  lbl.className='wolf-warn-label';lbl.id='wolf-lbl-'+cid;
   lbl.style.left=c.px+'%';lbl.style.top=c.py+'%';
-  lbl.textContent='\u26A0 '+species[0].toUpperCase()+species.slice(1)+' detected';
+  lbl.textContent='\u26A0 Wolf detected';
   wrap.appendChild(circle);wrap.appendChild(lbl);
   setTimeout(()=>{circle.classList.add('fade');lbl.classList.add('fade');},10000);
   setTimeout(()=>{circle.remove();lbl.remove();},11500);
@@ -733,7 +726,7 @@ async function simulate(){
   const d=await r.json();
   dets.push(d);
   flashCam(d.cam);
-  if(d.pred==='wolf'||d.pred==='bear') showDangerWarning(d.cam,d.pred);
+  if(d.pred==='wolf') showWolfWarning(d.cam);
   toast(`<b>${ICONS[d.pred]||''} ${d.pred[0].toUpperCase()+d.pred.slice(1)}</b> detected at ${d.cam} (${d.cam_name}) \u2014 ${d.conf.toFixed(0)}%`);
   renderList();renderChart();renderHeatmap();
 }catch(e){}
@@ -1109,18 +1102,6 @@ body{font-family:system-ui,-apple-system,sans-serif;background:#0f172a;color:#e2
 .card-badge.unverified{background:rgba(148,163,184,.15);color:#64748b}

 .empty{text-align:center;padding:60px 20px;opacity:.4;font-size:14px;line-height:1.6}
-
-.cam-chart-panel{
-  background:rgba(255,255,255,.04);border:1px solid rgba(255,255,255,.06);
-  border-radius:14px;padding:16px 20px;margin-bottom:24px;
-}
-.cam-chart-panel h3{
-  font-size:11px;opacity:.45;margin-bottom:12px;text-transform:uppercase;letter-spacing:.5px;
-}
-.bar-row{display:flex;align-items:center;margin-bottom:5px;font-size:12px;padding:3px 0;border-radius:5px}
-.bar-label{width:58px;text-align:right;padding-right:8px;opacity:.7}
-.bar-fill{height:16px;background:#3b82f6;border-radius:3px;transition:width .5s ease;min-width:2px}
-.bar-num{padding-left:6px;opacity:.45;font-size:11px}
 </style>
 </head>
 <body>
@@ -1134,10 +1115,6 @@ body{font-family:system-ui,-apple-system,sans-serif;background:#0f172a;color:#e2
 </div>

 <div class="container">
-  <div class="cam-chart-panel">
-    <h3>Detections by Species (this camera)</h3>
-    <div id="cam-chart-bars"></div>
-  </div>
 {% if dets %}
 <div class="grid">
 {% for d in dets %}
@@ -1165,37 +1142,6 @@ body{font-family:system-ui,-apple-system,sans-serif;background:#0f172a;color:#e2
 {% endif %}
 </div>

 <script>
-const CAM_ID={{ cam_id | tojson }};
-const CN={{ class_names | tojson }};
-const INITIAL={{ dets | tojson }};
-
-function renderCamChart(camDets){
-  const counts={};
-  CN.forEach(c=>{counts[c]=0});
-  camDets.forEach(d=>{
-    const p=d.pred;
-    if(Object.prototype.hasOwnProperty.call(counts,p)) counts[p]++;
-  });
-  const mx=Math.max(...Object.values(counts),1);
-  let h='';
-  for(const sp of CN){
-    const n=counts[sp];
-    const pct=(n/mx)*100;
-    h+=`<div class="bar-row"><div class="bar-label">${sp}</div>
-      <div class="bar-fill" style="width:${pct}%;${n?'':'opacity:.25'}"></div>
-      <div class="bar-num">${n}</div></div>`;
-  }
-  document.getElementById('cam-chart-bars').innerHTML=h;
-}
-
-renderCamChart(INITIAL);
-setInterval(async()=>{
-  try{
-    const r=await fetch('/api/detections');
-    const all=await r.json();
-    renderCamChart(all.filter(d=>d.cam===CAM_ID));
-  }catch(e){}
-},3000);
 </script>
 </body></html>"""