Data race on _pplx_g_sched_t::get_scheduler() (#1085) #1087
Conversation
dimarusyy
commented
Mar 26, 2019
- use C++11 compatible implementation
I'm going to submit an alternate fix for this, if that's okay?
}
return m_scheduler;
BillyONeal
Mar 26, 2019
Member
Bug, needs to return sptr.
{
    switch (m_state)
    {
sched_ptr sptr = std::atomic_load_explicit(&m_scheduler, std::memory_order_consume);
BillyONeal
Mar 26, 2019
Member
Not sure how I feel about memory_order_consume; even SG1 thinks that thing is effectively broken. Any objections to changing this to acquire?
dimarusyy
Mar 26, 2019
•
Author
Contributor
consume and acquire should be equivalent here, as the atomic is not used for synchronization. Anyway, I'll update it to acquire.
// This case means the global m_scheduler is not available.
// We spin off an individual scheduler instead.
return std::make_shared<::pplx::default_scheduler_t>();
::pplx::details::_Scoped_spin_lock lock(m_spinlock);
BillyONeal
Mar 26, 2019
Member
As far as I am aware, all the major shared_ptr implementations actually use spinlocks to implement this load operation anyway; perhaps we should just drop the check and take the spinlock every time. It does limit scalability of the system but this shared_ptr city wasn't going to win awards for that anyway.
dimarusyy
Mar 26, 2019
Author
Contributor
Am I right that you suggest dropping the double-checked lock here and just acquiring the spinlock on every call to get_scheduler()?
dimarusyy
Mar 27, 2019
Author
Contributor
I'm fine with acquiring the spinlock on every call to get_scheduler(). Thanks for the comment and the update.
Can you look at this alternative and let me know if that makes you happy?
I mean this alternative: #1088