open-science
Here are 324 public repositories matching this topic...
Datasets can be listed with lots of filters, but strangely not by uploader user id.
PR #76 removed the Digital Rocks Portal data repository because their SSL certificate is invalid. We should add it back once the site is back up. Here is the entry:
- [Digital Rocks Portal](https://www.digitalrocksportal.org/) – Powerful data portal for images of varied porous micro-structures
I’m realizing that not all of the API is covered in the documentation. Is there a way we can run Sphinx and report coverage for what is autodoc’d in the docs? It would be ideal to have this at 100%.
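One possibility, assuming the docs are built with Sphinx's standard tooling: the bundled `sphinx.ext.coverage` extension reports which objects are missing from the autodoc output. A minimal sketch of the relevant `conf.py` lines:

```python
# conf.py (sketch): enable the built-in coverage builder alongside autodoc.
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.coverage",  # adds the "coverage" builder
]

# Report undocumented Python objects in the build output as well as in
# the generated python.txt coverage report.
coverage_show_missing_items = True
```

Running `sphinx-build -b coverage docs/ docs/_build/coverage` then writes a `python.txt` report listing undocumented objects, which could be checked in CI.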
Instructions should be included for using docker with access to cvmfs and conddb (as pointed out by the CMS Open Data users at MIT).
Containers created in the dockerpool don't have a name that we can use when calling `docker ps`. This should be changed to make repairnator easier to use.
We also use a deprecated method for the volumes; we should check and change that.
Feature request: allow passing custom-made Matplotlib colormaps

```python
import matplotlib.pyplot as plt
cmap = plt.cm.get_cmap("viridis", 5)
```

then pass that colormap to the Viewer.
(The goal isn't necessarily categorical colormaps... this is just a simple example. A user might want to make a custom normalized map that is far more complex.)
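For the more complex case, a user might build a colormap from scratch; a minimal sketch using `matplotlib.colors.ListedColormap` (the colors and name here are arbitrary examples):

```python
from matplotlib.colors import ListedColormap

# Hypothetical custom 3-color map built from explicit RGB tuples.
custom_cmap = ListedColormap(
    [(0.9, 0.1, 0.1), (0.1, 0.9, 0.1), (0.1, 0.1, 0.9)],
    name="my_custom_map",
)
# custom_cmap could then be handed to the Viewer like any other colormap.
```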
DOC: Broken link
The link http://math.lanl.gov/~mac/papers/numerics/HS99B.pdf for Hyman and Shashkov, 1999 is broken in https://docs.simpeg.xyz/content/api_core/api_FiniteVolume.html
An idea might be to give the full reference instead of a link.
Currently, the raw data importer always assumes the data is in Fortran ordering, and the user has to manually transpose data that is actually in C ordering.
We should add an option to the raw data importer so that the user can specify the ordering of the raw data file. If it is Fortran ordering, we do nothing; if it is C ordering, we convert it to Fortran ordering.
- Also add corresponding unit tests that run an example of each case
This might help address issues such as in #3159
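The conversion itself could be as simple as the following NumPy sketch (the array contents are made up; the importer's actual file-reading code is not shown):

```python
import numpy as np

# Data read from a raw file, interpreted with C (row-major) ordering.
c_ordered = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

# If the user says the file is C-ordered, convert to Fortran ordering;
# if it is already Fortran-ordered, nothing needs to happen.
f_ordered = np.asfortranarray(c_ordered)
```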
In order to make manual testing easier, we should set up Docker Compose. This makes running the complete infrastructure needed to run OpenSNP as simple as running `docker-compose build` and `docker-compose run`.
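A minimal, hypothetical `docker-compose.yml` sketch (service names and images are placeholders, not the actual OpenSNP setup):

```yaml
version: "3"
services:
  web:
    build: .            # builds the app image from the local Dockerfile
    ports:
      - "3000:3000"
  db:
    image: postgres:11  # database the app talks to
```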
His website is probably the best comp neuro resource outside of NIF. Should communicate with him and add everything he's found over the years to the list.
While Avo2 currently supports writing to POV-Ray and VRML formats, it would be great to support writing to the more common PLY format:
http://paulbourke.net/dataformats/ply/
The key component would be to enable writing spheres and cylinders as vertex / face meshes rather than a sphere or cylinder primitive.
https://openknowledgemaps.org/map/54fc08a6e8f059ae89df4d38b0093720
A quick solution might be to only preview a shortened version of the query. Maybe a collapsible label? Shortened with ellipsis + tooltips for the full version?
Manifesto: https://codeisscience.github.io/manifesto/manifesto
I'd like this link to be really prominent (maybe in a button or something) here: https://github.com/yochannah/code-is-science/blob/master/content/_index.md
If you want to pick up this task:
- Comment on this issue stating that you intend to work on the task
- When you're ready, add your work to the repo and create a pull request
We need to refactor a lot of internal code to leverage PyVista over the native VTK dataset adapters and custom array converters that we have in here, which should drastically speed up some filters.
The `PVGeo.interface` module will need to be completely overhauled to use PyVista, as everything in there should have a better implementation in PyVista at this point.
Feature Request
Currently the ordering of dimensions described in the schema is in many cases not listed in the documentation. E.g., for `ElectricalSeries.data` we should add to the docval that the dimensions are num_time | num_channels. This would help users avoid errors with the ordering of dimensions.
This issue was motivated by #960
This issue is in part also related to #626
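To illustrate the documented ordering, a plain NumPy sketch (the shapes are made-up example values and are not tied to the NWB API):

```python
import numpy as np

# ElectricalSeries.data ordering: first axis is time, second is channels.
num_time, num_channels = 1000, 64
data = np.zeros((num_time, num_channels))

sample = data[0]    # all channels at one time step -> shape (num_channels,)
trace = data[:, 0]  # one channel across time       -> shape (num_time,)
```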
```
CMake Warning (dev) at /usr/share/cmake-3.13/Modules/FindPNG.cmake:51 (find_package):
  Policy CMP0074 is not set: find_package uses <PackageName>_ROOT variables.
  Run "cmake --help-policy CMP0074" for policy details.  Use the cmake_policy
  command to set the policy and suppress this warning.

  CMake variable ZLIB_ROOT is set to:

    /work/dcmjs-build/ZLIB-install

  For compatibility, CMake is ignoring the variable.
```
Build wasm module
There are no instructions as to what Helm charts and in what order they should be invoked. How exactly do I bring up a production server with these Helm charts?
It would help to have a download option to get a list of used packages. For example:
Download a list of ML packages used in model training, with all corresponding citations, in CSV format.
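A rough sketch of what such an export could look like (the package list and citations here are illustrative placeholders, not an existing API):

```python
import csv
import io

# Hypothetical packages used in a training run, with their citations.
packages = [
    ("scikit-learn", "Pedregosa et al., JMLR 12, 2011"),
    ("numpy", "Harris et al., Nature 585, 2020"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["package", "citation"])  # header row
writer.writerows(packages)
csv_text = buf.getvalue()  # this string would be offered for download
```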
E.g. for `listOMLTasks` the Value documentation is quite scarce:

> Value
> [data.frame].

It would be nice to know what the columns of the data.frame actually are, or at least a link to where I can find this. Since this is an issue that I just ran into myself, can anyone tell me what `max.nominal.att.distinct.values` means?
Description
The ApplyScriptToRemotes script applies a script to all remote modules whose build status reports a successful build.
There are a number of aspects (many of them were already mentioned in PR #781) that could be improved to make the script more robust.