# evaluation
Here are 639 public repositories matching this topic...
nlp
machine-learning
natural-language-processing
computer-vision
metrics
tensorflow
numpy
evaluation
pandas
pytorch
datasets
-
Updated
May 8, 2021
Building a modern functional compiler from first principles. (http://dev.stephendiehl.com/fun/)
compiler
functional-programming
book
lambda-calculus
evaluation
type-theory
type
pdf-book
type-checking
haskell
type-system
functional-language
hindley-milner
type-inference
intermediate-representation
-
Updated
Jan 11, 2021 - Haskell
Klipse is a JavaScript plugin for embedding interactive code snippets in tech blogs.
react
javascript
ruby
python
scheme
clojure
lua
clojurescript
reactjs
common-lisp
ocaml
brainfuck
evaluation
prolog
codemirror-editor
reasonml
interactive-snippets
code-evaluation
klipse-plugin
-
Updated
Sep 2, 2021 - HTML
End-to-end Automatic Speech Recognition for Mandarin and English in TensorFlow
audio
deep-learning
tensorflow
paper
end-to-end
evaluation
cnn
lstm
speech-recognition
rnn
automatic-speech-recognition
feature-vector
data-preprocessing
phonemes
timit-dataset
layer-normalization
rnn-encoder-decoder
chinese-speech-recognition
-
Updated
Aug 25, 2021 - Python
Multi-class confusion matrix library in Python (usage sketch below)
data-science
data
machine-learning
data-mining
statistics
ai
deep-learning
neural-network
matrix
evaluation
mathematics
ml
artificial-intelligence
statistical-analysis
classification
accuracy
data-analysis
deeplearning
confusion-matrix
multiclass-classification
-
Updated
Sep 6, 2021 - Python
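A minimal sketch of how this library (PyCM) might be called, assuming the `ConfusionMatrix` constructor and its `actual_vector`/`predict_vector` keyword arguments from the project's README; the label vectors are toy data.

```python
# Hedged sketch: assumes PyCM's documented ConfusionMatrix API (pip install pycm).
from pycm import ConfusionMatrix

actual = ["cat", "dog", "cat", "bird", "dog", "cat"]
predicted = ["cat", "dog", "bird", "bird", "dog", "dog"]

# Build the multi-class confusion matrix from paired label vectors.
cm = ConfusionMatrix(actual_vector=actual, predict_vector=predicted)
print(cm)  # prints the matrix plus per-class and overall statistics
```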
(IROS 2020, ECCVW 2020) Official Python Implementation for "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics" (metric sketch below)
tracking
machine-learning
real-time
computer-vision
robotics
evaluation
evaluation-metrics
multi-object-tracking
kitti
3d-tracking
3d-multi-object-tracking
2d-mot-evaluation
3d-mot
3d-multi
kitti-3d
-
Updated
May 19, 2021 - Python
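This repository introduces new 3D MOT evaluation metrics on top of the classic CLEAR-MOT family. As a generic illustration of that family (not this repository's code), MOTA folds misses, false positives, and identity switches into a single accuracy score:

```python
def mota(misses: int, false_positives: int, id_switches: int,
         gt_objects: int) -> float:
    """CLEAR-MOT accuracy: 1 - (FN + FP + IDSW) / GT.

    Generic illustration of the classic metric; not this repo's implementation.
    """
    return 1.0 - (misses + false_positives + id_switches) / gt_objects

print(mota(misses=10, false_positives=5, id_switches=2, gt_objects=100))  # 0.83
```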
Evaluation code for various unsupervised automated metrics for Natural Language Generation (BLEU sketch below).
nlp
natural-language-processing
meteor
machine-translation
dialogue
evaluation
dialog
rouge
natural-language-generation
nlg
cider
rouge-l
skip-thoughts
skip-thought-vectors
bleu-score
bleu
task-oriented-dialogue
-
Updated
May 6, 2021 - Python
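BLEU is one of the metrics toolkits like this wrap. As a self-contained illustration of what a sentence-level BLEU call looks like, here is a sketch using NLTK rather than this repository's own API:

```python
# Sentence-level BLEU via NLTK (illustration only, not this repo's API).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["the", "cat", "sat", "on", "the", "mat"]]
hypothesis = ["the", "cat", "is", "on", "the", "mat"]

# Smoothing avoids zero scores when higher-order n-grams have no matches.
score = sentence_bleu(references, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```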
Short and sweet LISP editing
-
Updated
Sep 6, 2021 - Emacs Lisp
XAI - An eXplainability toolbox for machine learning
machine-learning
ai
evaluation
ml
artificial-intelligence
upsampling
bias
interpretability
feature-importance
explainable-ai
explainable-ml
xai
imbalance
downsampling
explainability
bias-evaluation
machine-learning-explainability
xai-library
-
Updated
Jul 20, 2021 - Python
FuzzBench - Fuzzer benchmarking as a service.
-
Updated
Sep 9, 2021 - Python
Python implementation of the IOU Tracker (IoU sketch below)
tracker
python
detection
evaluation
demo-script
mot
detrac
iou-tracker
detrac-train
eb-detections
ua-detrac
tracking-by-detection
-
Updated
Feb 18, 2020 - Python
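Tracking-by-detection with an IoU tracker associates detections across frames by their box overlap. A self-contained sketch of that overlap score (not this repository's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```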
vlomonaco commented on Feb 23, 2021:
I noticed it is currently quite tricky to generate a benchmark with an unbalanced number of examples per step. It would be nice if ni_scenario, nc_scenario, and similar had an option to set the number of examples for each step.
TCExam is a CBA (Computer-Based Assessment) system (e-exam, CBT: Computer-Based Testing) for universities, schools, and companies that enables educators and trainers to author, schedule, deliver, and report on surveys, quizzes, tests, and exams.
testing
school
university
evaluation
exam
cba
essay
computer-based-assessment
cbt
multiple-choice
mcsa
computer-based-testing
e-exam
tcexam
mcma
-
Updated
Aug 5, 2021 - PHP
A General Toolbox for Identifying Object Detection Errors
-
Updated
Jun 22, 2021 - Python
Expression evaluation in golang
go
golang
parser
parsing
evaluation
godoc
expression-evaluator
expression-language
evaluate-expressions
gval
-
Updated
Jun 3, 2021 - Go
Visual Object Tracking (VOT) challenge evaluation toolkit
-
Updated
Apr 19, 2021 - MATLAB
Case Recommender: A Flexible and Extensible Python Framework for Recommender Systems (ranking-metric sketch below)
python
algorithm
feedback
evaluation
batch
ranking
recommendation-system
top-k
recommender-systems
forte
rating-prediction
-
Updated
Jun 17, 2021 - Python
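For top-k ranking evaluation of the kind this framework reports, precision@k measures what fraction of the first k recommendations are relevant. A generic sketch (not Case Recommender's API):

```python
def precision_at_k(recommended, relevant, k: int) -> float:
    """Fraction of the top-k recommended items that appear in the relevant set.

    Generic ranking-metric illustration, not Case Recommender's API.
    """
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k

print(precision_at_k(["a", "b", "c", "d"], {"b", "d", "e"}, k=3))  # 1/3 ≈ 0.333
```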
SemanticKITTI API for visualizing the dataset, processing data, and evaluating results (label-parsing sketch below).
machine-learning
deep-learning
evaluation
labels
dataset
semantic-segmentation
semantic-scene-completion
large-scale-dataset
-
Updated
Sep 7, 2021 - Python
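Per the dataset's documentation, each point's label in a SemanticKITTI `.label` file is a uint32 whose lower 16 bits hold the semantic class and upper 16 bits the instance id. A short sketch of that parsing (treat the layout as an assumption and verify against the repo's README; the file path is hypothetical):

```python
import numpy as np

# Each uint32 packs semantic class (low 16 bits) and instance id (high 16 bits).
labels = np.fromfile("000000.label", dtype=np.uint32)  # hypothetical path
semantic = labels & 0xFFFF  # class id per point
instance = labels >> 16     # instance id per point
```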
A collection of datasets that pair questions with SQL queries (loading sketch below).
nlp
natural-language-processing
sql
database
neural-network
evaluation
dataset
dynet
natural-language-interface
-
Updated
Dec 29, 2020 - Python
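Datasets like these typically ship as records pairing a natural-language question with its SQL query. A generic loading sketch; the file name and the "question"/"sql" field names are hypothetical, not this collection's actual schema:

```python
import json

# Hypothetical file and field names; check the dataset's own schema.
with open("examples.json") as f:
    pairs = json.load(f)

for example in pairs[:3]:
    print(example["question"], "=>", example["sql"])
```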
C# Eval Expression | Evaluate, compile, and execute C# code and expressions at runtime.
-
Updated
Sep 9, 2021 - C#
An extensive evaluation and comparison of 28 state-of-the-art superpixel algorithms on 5 datasets.
-
Updated
Jul 31, 2021 - C++
High-fidelity performance metrics for generative models in PyTorch (FID sketch below)
reproducible-research
metrics
evaluation
pytorch
gan
generative-model
reproducibility
precision
inception-score
frechet-inception-distance
kernel-inception-distance
perceptual-path-length
-
Updated
Sep 2, 2021 - Python
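The quantity behind FID, one of the metrics this package computes, is the Fréchet distance between Gaussians fitted to two feature sets: d^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^(1/2)). A NumPy/SciPy illustration of that formula (not this package's API):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats1: np.ndarray, feats2: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two (N, D) feature arrays."""
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    s1 = np.cov(feats1, rowvar=False)
    s2 = np.cov(feats2, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(size=(256, 64))
b = rng.normal(loc=0.5, size=(256, 64))
print(frechet_distance(a, b))  # larger when the feature distributions differ
```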
Simple Safe Sandboxed Extensible Expression Evaluator for Python (usage sketch below)
-
Updated
Sep 9, 2021 - Python
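A minimal usage sketch, assuming this entry describes the `simpleeval` package and its documented `simple_eval` function (`pip install simpleeval`):

```python
from simpleeval import simple_eval

# Evaluates a restricted expression without exposing Python's full eval().
print(simple_eval("21 + 21"))                 # 42
print(simple_eval("x * 3", names={"x": 14}))  # variables passed explicitly
```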
A Simple Math and Pseudo C# Expression Evaluator in One C# File. Can also execute small C#-like scripts.
parser
reflection
math
script
scripting
evaluation
fluid
mathematical-expressions-evaluator
expression
calculations
evaluator
mathematical-expressions
execute
expression-parser
eval
expression-evaluator
csharp-script
evaluate-expressions
evaluate
executescript
-
Updated
Aug 23, 2021 - C#
ERRor ANnotation Toolkit: Automatically extract and classify grammatical errors in parallel original and corrected sentences (usage sketch below).
-
Updated
Aug 24, 2021 - Python
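A hedged sketch of ERRANT's workflow as documented in its README (`pip install errant` plus a spaCy English model); treat the exact method and attribute names as assumptions:

```python
import errant

annotator = annotator = errant.load("en")  # loads the English annotator (needs a spaCy model)
orig = annotator.parse("This are gramatical sentence .")
cor = annotator.parse("This is a grammatical sentence .")

# Align the sentence pair, extract edits, and print their error types.
for edit in annotator.annotate(orig, cor):
    print(edit.o_str, "->", edit.c_str, edit.type)
```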
Description
Currently, when a challenge link from EvalAI is shared, users see a generic view of the EvalAI homepage. We want challenge-specific details to be shown when a link is shared. Here's how it looks currently:
Expected behavior:
T