Expedite the DLTMAP from orbit #504

cliu-sift asked this question in Q&A

When I call it like this, I run into memory leaks or a "worker unexpectedly stopped" error when the number of customers in customer_id is large. I suspect there is some conflict between my parallel function and something that DLTMAP calls internally, but I'm not sure. Does anyone know a better way to make the process more efficient?

I also tried increasing the number of cores specified in the DLTMAP constructor, like below:

    dlt = DLTMAP(
        response_col=response_col,
        date_col=date_col,
        seasonality=48,
        prediction_percentiles=[0, 0.25, 0.5, 1, 99, 99.5, 99.75, 100],
        global_trend_option='flat',
        cores=60
    )

But unfortunately, it didn't help: the running time of dlt.fit() is almost exactly the same whether I specify cores=1 or cores=60. Did I miss anything here? Any feedback is appreciated. Thank you!

","upvoteCount":1,"answerCount":1,"acceptedAnswer":{"@type":"Answer","text":"

Hi @cliu-sift I don't think the arg cores help in MAP since it is a single chain optimization. Meanwhile, i wonder if you could leverage package like joblib in that case. @wangzhishi @ppstacy

","upvoteCount":1,"url":"https://github.com/uber/orbit/discussions/504#discussioncomment-1156955"}}}

Answered by edwinnglabs

Hi @cliu-sift, I don't think the arg cores helps in MAP since it is a single-chain optimization. Meanwhile, I wonder if you could leverage a package like joblib in that case. @wangzhishi @ppstacy
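
To illustrate the joblib route, here is a minimal sketch (not from the discussion) that fits one DLTMAP model per customer in parallel. It reuses the constructor arguments from the question, assumes df, response_col, and date_col are defined as in the question's setup, and treats the import path, the helper name fit_one_customer, and n_jobs=8 as placeholders (the DLTMAP import location differs across orbit versions):

    from joblib import Parallel, delayed
    from orbit.models.dlt import DLTMAP  # import path depends on the orbit version


    def fit_one_customer(customer_df):
        # Each MAP fit is a single-chain optimization, so it is effectively
        # single-core; build a fresh model per customer and fit it.
        dlt = DLTMAP(
            response_col=response_col,
            date_col=date_col,
            seasonality=48,
            prediction_percentiles=[0, 0.25, 0.5, 1, 99, 99.5, 99.75, 100],
            global_trend_option='flat',
        )
        dlt.fit(customer_df)
        return dlt


    # Parallelism comes from running many independent fits at once;
    # n_jobs=8 is a placeholder, tune it to your machine.
    models = Parallel(n_jobs=8)(
        delayed(fit_one_customer)(group)
        for _, group in df.groupby('customer_id')
    )

Since each MAP fit is single-chain, raising cores inside DLTMAP cannot speed it up; distributing independent per-customer fits across a bounded pool of workers is what reduces wall-clock time, and keeping n_jobs well below the number of customers also limits peak memory.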
