Commit 82fb02f: submission guidelines
1 parent d2b66f9

1 file changed: SUBMISSION_GUIDELINES.md (130 additions, 0 deletions)

# DrivAerNet++ Submission Guidelines

Thank you for your interest in contributing to the DrivAerNet++ leaderboard! This document outlines the process and requirements for submitting your model's results.

## Submission Process

1. Fork the [DrivAerNet repository](https://github.com/Mohamedelrefaie/DrivAerNet)
2. Evaluate your model using the official train/validation/test splits
3. Create a new branch for your submission
4. Add your results and required files
5. Submit a pull request

## Required Files

Your submission should include:

1. `model_description.md` (optional):
   - Model architecture details
   - Implementation specifics
   - Training configuration and hyperparameters
   - Link to paper (if applicable)
   - Link to trained model weights or inference code

2. `test_results.txt`:
   - Complete evaluation metrics on the test set
   - Inference time statistics

## Evaluation Metrics

### 1. Drag Coefficient Prediction

For drag coefficient prediction, the following metrics must be reported:

```python
# Required metrics:
- Mean Squared Error (MSE)
- Mean Absolute Error (MAE)
- Maximum Absolute Error (Max AE)
- R² Score
- Total inference time and samples processed
```

Example test output format:
```
Test MSE: 0.000123
Test MAE: 0.008976
Max AE: 0.034567
Test R²: 0.9876
Total inference time: 12.34s for 1200 samples
```
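
As a point of reference, the short sketch below shows one way these numbers could be computed and printed. It is illustrative only: `preds` and `targets` stand in for 1-D `torch` tensors of predicted and ground-truth drag coefficients on the official test split, and the timing values are assumed to have been measured around your own inference loop.

```python
import torch
import torch.nn.functional as F
from sklearn.metrics import r2_score

# Illustrative stand-ins for your model's predictions and the ground truth.
preds = torch.tensor([0.291, 0.305, 0.288])
targets = torch.tensor([0.290, 0.301, 0.292])
total_time, n_samples = 12.34, len(targets)  # measured around your inference loop

mse = F.mse_loss(preds, targets).item()
mae = F.l1_loss(preds, targets).item()
max_ae = (preds - targets).abs().max().item()
r2 = r2_score(targets.numpy(), preds.numpy())

print(f"Test MSE: {mse:.6f}")
print(f"Test MAE: {mae:.6f}")
print(f"Max AE: {max_ae:.6f}")
print(f"Test R²: {r2:.4f}")
print(f"Total inference time: {total_time:.2f}s for {n_samples} samples")
```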

### 2. Surface Field and Volumetric Field Prediction

For surface pressure field and volumetric field predictions, the following metrics must be reported:

```python
# Required metrics:
- Mean Squared Error (MSE)
- Mean Absolute Error (MAE)
- Maximum Absolute Error (Max AE)
- Relative L1 Error (%) = mean(|prediction - target|_1 / |target|_1)
- Relative L2 Error (%) = mean(|prediction - target|_2 / |target|_2)
- Total inference time and samples processed
```

Example test output format:
```
Test MSE: 0.000456
Test MAE: 0.012345
Max AE: 0.078901
Relative L2 Error: 2.345678
Relative L1 Error: 1.987654
Total inference time: 45.67s for 1200 samples
```
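
To make the relative error definitions above concrete, the sketch below computes them as the mean over test samples of the per-sample norm ratio, multiplied by 100 on the assumption that the "(%)" in the definitions means the values are reported as percentages. The tensor names and the `(num_samples, num_points)` shape are illustrative, not prescribed by the benchmark.

```python
import torch

# Illustrative stand-ins: each row holds one sample's field values
# (e.g. surface pressure at every point), flattened to (num_samples, num_points).
preds = torch.randn(4, 1000)
targets = torch.randn(4, 1000)

# Mean over samples of ||prediction - target||_p / ||target||_p, as a percentage.
rel_l1 = 100.0 * torch.mean(
    torch.norm(preds - targets, p=1, dim=-1) / torch.norm(targets, p=1, dim=-1)
)
rel_l2 = 100.0 * torch.mean(
    torch.norm(preds - targets, p=2, dim=-1) / torch.norm(targets, p=2, dim=-1)
)

print(f"Relative L2 Error: {rel_l2.item():.6f}")
print(f"Relative L1 Error: {rel_l1.item():.6f}")
```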

## Code Requirements

### Test Function Implementation

Your evaluation code should follow this structure:

```python
import time
import torch
import torch.nn.functional as F
from sklearn.metrics import r2_score


def test_model(model, test_dataloader, config):
    """
    Test the model using the provided test DataLoader and calculate metrics.

    Args:
        model: The trained model to be tested
        test_dataloader: DataLoader for the test set
        config: Configuration dictionary containing model settings
    """
    model.eval()
    all_preds, all_targets = [], []

    start_time = time.time()
    with torch.no_grad():
        # Assumes the DataLoader yields (inputs, targets) batches
        for inputs, targets in test_dataloader:
            outputs = model(inputs)
            all_preds.append(outputs.cpu())
            all_targets.append(targets.cpu())
    total_time = time.time() - start_time

    outputs = torch.cat(all_preds)
    targets = torch.cat(all_targets)

    # For drag coefficient prediction:
    mse = F.mse_loss(outputs, targets)
    mae = F.l1_loss(outputs, targets)
    max_ae = (outputs - targets).abs().max()
    r2 = r2_score(targets.numpy(), outputs.numpy())

    # For field predictions: mean over samples of the per-sample norm ratio
    # (multiply by 100 if reporting the relative errors as percentages)
    rel_l2 = torch.mean(
        torch.norm(outputs - targets, p=2, dim=-1) /
        torch.norm(targets, p=2, dim=-1)
    )
    rel_l1 = torch.mean(
        torch.norm(outputs - targets, p=1, dim=-1) /
        torch.norm(targets, p=1, dim=-1)
    )

    return {'mse': mse.item(), 'mae': mae.item(), 'max_ae': max_ae.item(),
            'r2': float(r2), 'rel_l2': rel_l2.item(), 'rel_l1': rel_l1.item(),
            'total_time': total_time, 'num_samples': len(targets)}
```
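
Purely for illustration, the snippet below exercises the `test_model` sketch above end to end. The `nn.Linear` model, random tensors, and empty `config` are placeholders, not part of the DrivAerNet++ codebase; substitute your trained network and a DataLoader built from the official test split.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data, used only to exercise test_model end to end.
model = nn.Linear(16, 1)
loader = DataLoader(TensorDataset(torch.randn(32, 16), torch.randn(32, 1)),
                    batch_size=8)

metrics = test_model(model, loader, config={})
print(f"Test MSE: {metrics['mse']:.6f}")
print(f"Test MAE: {metrics['mae']:.6f}")
print(f"Max AE: {metrics['max_ae']:.6f}")
print(f"Test R²: {metrics['r2']:.4f}")
print(f"Total inference time: {metrics['total_time']:.2f}s for {metrics['num_samples']} samples")
```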

## Submission Checklist

Before submitting your pull request, ensure:

- [ ] All required metrics are calculated and reported
- [ ] Results are obtained using the official data splits
- [ ] Model description is complete and clear
- [ ] Code follows the provided format for metric calculation
- [ ] All results are reproducible

## Review Process

1. Your submission will be reviewed for completeness
2. Results will be verified for correctness
3. Upon approval, your results will be added to the leaderboard

For questions or clarifications, please contact:
Mohamed Elrefaie (email: mohamed.elrefaie [at] mit [dot] edu)
