Azure

Azure is a cloud computing service created by Microsoft for building, testing, deploying, and managing applications and services through a global network of Microsoft-managed data centers.
Template name: 101-event-grid, https://github.com/Azure/azure-quickstart-templates/blob/master/101-event-grid/azuredeploy.json
The description of the eventGridSubscriptionUrl parameter ends with:
(RequestBin URLs are exempt from this requirement.)
This is no longer true: deployments will fail if the validation challenge is not answered or the validation URL is not visited, even when
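For context, the handshake can be completed programmatically: Event Grid probes a new subscription with a SubscriptionValidationEvent carrying a validation code, and the endpoint must echo that code back. A minimal sketch, assuming a Flask endpoint (the route path is illustrative):

```python
# Minimal sketch of the Event Grid subscription validation handshake,
# assuming Flask; the route path is illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/events", methods=["POST"])
def handle_events():
    for event in request.get_json():
        # Event Grid probes new subscriptions with a validation event;
        # echoing the validation code back completes the handshake.
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            code = event["data"]["validationCode"]
            return jsonify({"validationResponse": code})
    return ("", 200)
```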
https://gocloud.dev/howto/blob/open-bucket/#prefix
Comment from @vangent: "I think we can drop this section and just leave it in the godoc. Thoughts?"
Originally posted in google/go-cloud@ff6e56c
Hi,
What is the process for requesting the wrapping of an existing provider? We are interested in having the Auth0 Terraform provider ported to Pulumi.
Please let me know how the process of adding new providers works.
Thank you.
## Python/Regex fix
This is a reminder for me, or a task if anyone wants :P
Basically, the last two questions aren't really regex questions.
To do:
- Move said questions to the correct place.
- Add new regex questions (Python related!)? A sample is sketched below.
- Maybe add a new ## Regex section, as it is a valuable skill.
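A hedged example of the kind of Python-specific regex question that could be added (the question and answer are my own illustration, not from the existing list):

```python
import re

# Example Python-specific regex question: "How do you extract named groups?"
# re.match returns a Match object whose groupdict() maps group names to values.
pattern = re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})")
match = pattern.match("2020-05-14")
if match:
    print(match.groupdict())  # {'year': '2020', 'month': '05', 'day': '14'}
```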
What is the problem?
I've successfully installed Gitea using the one-click install on a fresh install of CapRover. When I try to install drone-gitea using the one-click install, I get the following error at the 7th step: "Failed: Error: Request failed with status code 500".
If applicable, content of captain-definition file:
N/A
Steps to reproduce the problem:
- Install Gitea
Small feature request. I am using helmfile for the deployment of our k8s infrastructure and wanted to use sops for encrypting secrets. I need to use --keyservice, but as I am calling sops inside a wrapper (helmfile) of a wrapper (helm secrets), I cannot pass this flag to sops in a clean way.
Could you provide an alternative way to supply this option to sops, in the .sops.conf and/or i
As a new Custodian user, I'm trying to understand the usage of variables in policies. There seem to be multiple types of variables.
A non-exhaustive list for a beginner could be:
- vars in a policy yaml
- standard runtime variables for in
- Explain in the notebook/FAQ what non-maxima suppression is and what values to set (threshold on IoU); see the sketch below.
- Explain and provide code for how to pick a good score threshold (reuse Patrick's plot, which was implemented for the drone demo).
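For the first item, a minimal sketch of greedy non-maxima suppression in NumPy; the [x1, y1, x2, y2] box format is an assumption:

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.5) -> list:
    """Greedy non-maxima suppression: keep the highest-scoring box, drop any
    remaining box whose IoU with it exceeds the threshold, and repeat.
    Boxes are rows of [x1, y1, x2, y2]; returns indices of the kept boxes."""
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the current best box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Keep only boxes that overlap the chosen box less than the threshold.
        order = order[1:][iou <= iou_threshold]
    return keep
```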
If you'd like to have your company represented and are using Komiser, please give formal written permission below via a comment and by email to [email protected].
We will need a URL to an SVG or PNG logo, a text title, and a company URL.
🐛 Bug Report
Operating System:
macOS 10.15.3
Docker Image:
budtmo/docker-android-x86-10.0
Docker Version:
Docker Desktop v2.2.0.3
Docker-compose version (Only if you use it):
N/A
Docker Command to start docker-android:
N/A
Expected Behavior
docker build completes without errors
Actual Behavior
An image is built based on budtmo/docker-android-x86-10.0
I think it would be useful to add a mention of async streams (IAsyncEnumerable) for when a developer wants to tackle the 'Handle streams of data' problem.
Version
com.microsoft.ml.spark:mmlspark_2.11:jar:0.18.1
spark = 2.4.3
scala = 2.11.12
Data (CSV with header): https://gist.github.com/ttpro1995/69051647a256af912803c9a16040f43a
Download the data and save it as a CSV file at /data/public/HIGGS/higgs.test.predictioncsv
val data = spark.read.option("header", "true").option("inferSchema", "true").csv("/data/public/HIGGS/higgs.test.predictioncsv")
The official API docs are here:
https://developers.google.com/maps/documentation/geocoding/start
I wanted to know whether it would be possible to add this API's Swagger doc (and that of similar Google APIs like this).
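For reference, the underlying REST endpoint is easy to exercise directly; a minimal sketch using requests, with a placeholder API key:

```python
import requests

# Minimal sketch of a Geocoding API call; YOUR_API_KEY is a placeholder.
resp = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={
        "address": "1600 Amphitheatre Parkway, Mountain View, CA",
        "key": "YOUR_API_KEY",
    },
)
resp.raise_for_status()
for result in resp.json().get("results", []):
    print(result["formatted_address"], result["geometry"]["location"])
```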
Summary
In preparation for the next milestone release of the Event Hubs client, the documentation that accompanies it should be reviewed for accuracy, and any needed updates made.
Scope of Work
- The document comments for the API surface exist and are accurate. Any needed updates to reflect the new retry options approach have been made.
- The README content is up to date.
The Metric Trigger now supports a "dividePerInstance" boolean to aid with scaling rules based on Storage Queues. This should be exposed in the Terraform API.
Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers.
experiment_timeout_hours description should mention that it can be decimal (e.g. 0.25 for 15 min)
The experiment_timeout_hours parameter's description should mention that it can be a decimal (e.g. 0.25 for 15 minutes); as currently written, it looks as if the minimum value is 1 hour. A sketch illustrating a decimal value follows the Document Details block.
Document Details
- ID: 0bc2b21e-6b1a-cb94-2857-147177a29d7c
- Version Independent ID: d14620a6-a2f6-49f1-632e-73903d41de8
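To illustrate the point above, a minimal sketch assuming the azureml-train-automl SDK; the task, dataset, and label arguments are placeholders:

```python
from azureml.train.automl import AutoMLConfig

# Minimal sketch: experiment_timeout_hours accepts a decimal value,
# so 0.25 means a 15-minute timeout. The task, training_data, and label
# arguments below are illustrative placeholders.
automl_config = AutoMLConfig(
    task="classification",
    training_data=train_dataset,    # assumed defined earlier as a TabularDataset
    label_column_name="label",
    experiment_timeout_hours=0.25,  # 15 minutes; not limited to whole hours
)
```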
The black theme no longer works on Chrome; the browser complains:
Uncaught DOMException: Failed to read the 'localStorage' property from 'Window': Access is denied for this document.
This is likely because the report is a static/local file.
Currently I cannot find any docs about dependency-watchdog. It seems to be:
- probing the kube-apiserver and scaling the kube-controller-manager down to 0 replicas when the kube-apiserver is reachable internally but unreachable externally
- restarting control plane components stuck in CrashLoopBackOff once etcd is available again (illustrative sketch below)
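Until proper docs exist, here is an illustrative Python sketch of the probe-and-scale behaviour described above; this is not dependency-watchdog's actual code, and the namespace and replica values are assumptions. It uses the official kubernetes client:

```python
# Illustrative sketch only -- NOT dependency-watchdog's actual implementation.
# Uses the official `kubernetes` Python client; the namespace is hypothetical.
from kubernetes import client, config

def reconcile(apiserver_internal_ok: bool, apiserver_external_ok: bool) -> None:
    config.load_incluster_config()
    apps = client.AppsV1Api()
    # When the kube-apiserver is reachable internally but not externally,
    # scale kube-controller-manager to 0 so it does not act on nodes whose
    # kubelets can no longer report their heartbeats.
    replicas = 0 if (apiserver_internal_ok and not apiserver_external_ok) else 1
    apps.patch_namespaced_deployment_scale(
        name="kube-controller-manager",
        namespace="shoot--project--name",  # hypothetical control plane namespace
        body={"spec": {"replicas": replicas}},
    )
```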
We create multiple jars during our builds to accommodate multiple versions of Apache Spark. In the current approach, the implementation is copied from one version to another and then the necessary changes are made.
An ideal approach could create a common directory and extract common classes from the duplicated code. Note that even if a class/code is exactly the same, you cannot pull it out into a common class
This is from the documentation at http://docs.seldon.io/api-oauth.html#actions
The item attribute definition is:
- string name [attr_id 1]
- string artist [attr_id 2]
- enum category [attr_id 3]
- double price [attr_id 4]
Where:
- category is the enumeration (pop [value_id 1], rock [value_id 2], rap [value_id 3])
- a range definition is created for the price (<10 [value_id 1], 10-20 [value_id
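As an illustration of the mapping above (not Seldon's API; the >20 bucket is an assumption, since the quoted text is cut off), the enum and range attributes resolve raw values to value_ids roughly like this:

```python
# Hypothetical illustration of the attr_id/value_id mapping described above.
CATEGORY_VALUE_IDS = {"pop": 1, "rock": 2, "rap": 3}  # enum category [attr_id 3]

def price_value_id(price: float) -> int:
    """Map a raw price onto its range bucket: <10 -> 1, 10-20 -> 2."""
    if price < 10:
        return 1
    if price <= 20:
        return 2
    return 3  # assumed catch-all bucket; the original text is truncated here

print(CATEGORY_VALUE_IDS["rock"], price_value_id(12.5))  # -> 2 2
```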
Bug
For want of a better categorisation. The first thing that kube-proxy logs at startup is the following:
W0913 12:02:58.529651 1 server.go:195] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Environment
- Platform: aws
- OS: container-linux
- Ref: v1.11.2
- Terraform: 0.11.8
- Pl
Hi Team,
Can we pass feedback to the team that maintains the Azure SDK documentation to provide more detail that could help customers? The current documentation is cluttered and doesn't have proper verbal explanations of the available APIs and methods.
It would be great if we could improve the documentation a bit so that it is more helpful to end customers.
Regards,
Jay
Scan with variables
Hi :)
Is it possible to scan with variables from a JSON or .tf file?
For example:
{
  "region": "eu-central-1",
  "environment_id": "demo",
  "tags": {
    "EnvironmentId": "integration",
    "ApplicationName": "demo",
    "EnvironmentType": "development",
    "Project": "pepito"
  },
  "rds_instances": [
    {
      "sg_name": "test-sg",
      "kms_key_label": "kms",
      "rds_label": "rds",
Describe the bug
BaGet must be restarted after installing a new PostgreSQL database, due to the following error message:
Npgsql.NpgsqlException (0x80004005): The NpgsqlDbType 'Citext' isn't present in your database.
Workaround
Restart BaGet to fix this issue.
To Reproduce
- Install PostgreSQL
- Delete BaGet's database if it exists
- Start BaGet
- Verify the P
Description
Add Azure Notebooks to our SETUP doc.
I tested Google Colab and Azure Notebooks for running reco-repo without needing to create any DSVM or compute myself, and they work really well with simple tweaks to the notebooks (e.g., some libs have to be installed manually).
I think it would be good to add at least Azure Notebooks to our SETUP doc, where users can easily test out our repo w/o