
llm-reasoning

Here are 37 public repositories matching this topic...

[AAAI 2025] ORQA is a new QA benchmark designed to assess the reasoning capabilities of LLMs in the specialized technical domain of Operations Research. The benchmark evaluates whether LLMs can emulate the knowledge and reasoning skills of OR experts when presented with complex optimization modeling tasks.

  • Updated Jun 7, 2025
  • Python
