Made stream fault handler a pluggable component. #5457
Conversation
...leans.Runtime/Streams/PersistentStream/PersistentStreamPullingManager.cs
src/AWS/Orleans.Streaming.SQS/Hosting/ClientBuilderExtensions.cs
8d821f2 to 9088d39
Looks like this needs a rebase
- used nameof rather than name string during argument checks
1d3c370 to 233d874
@@ -49,6 +50,7 @@ public SiloPersistentStreamConfigurator(string name, Action<Action<IServiceColle
ConfigureComponent<ILifecycleParticipant<ISiloLifecycle>>(PersistentStreamProvider.ParticipateIn<ISiloLifecycle>);
ConfigureComponent<IQueueAdapterFactory>(adapterFactory);
ConfigureComponent<IConfigurationValidator>(PersistentStreamStorageConfigurationValidator.Create);
base.configureDelegate(services => services.TryAddSingleton<IStreamFailureHandler>(NoOpStreamDeliveryFailureHandler.Create));
xiazen
Mar 22, 2019
Contributor
Should this be singleton or transient? Before this PR, the behavior was closer to transient: the stream provider cached the handler factory and created a new handler each time the pulling agent asked for one. Also, before this PR, the failure handler could be initialized asynchronously; see Func<string, Task<IStreamFailureHandler>> StreamFailureHandlerFactory.
I bring this up because this is more than a compile-time breaking change; it is also a functionality/behavior breaking change.
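For reference, the pre-PR shape described above looked roughly like this (a hedged sketch, not the exact Orleans surface; MyFailureHandler and its InitializeAsync are hypothetical):

```csharp
using System;
using System.Threading.Tasks;
using Orleans.Streams;

// Sketch of the old pattern: the adapter factory exposed an asynchronous,
// per-call factory delegate, so the pulling agent got a freshly constructed
// (and asynchronously initialized) handler each time it asked for one.
public class MyAdapterFactorySketch
{
    public Func<string, Task<IStreamFailureHandler>> StreamFailureHandlerFactory =>
        async providerName =>
        {
            var handler = new MyFailureHandler(providerName); // hypothetical impl
            await handler.InitializeAsync();                  // async construction
            return handler;
        };
}
```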
jason-bragg
Mar 22, 2019
Author
Contributor
For the specific impl "NoOpStreamDeliveryFailureHandler", a singleton is safe.
In the case of a user configured handler, all configured components have been singletons so far, and I think that's fine for this one as well. I understand that this is a deviation from the previous pattern, but I suspect most implementations will not contain per queue state. If they do need per queue state, the application developer can register their impl as a transient named service.
Regarding the loss of asynchronous construction, if an impl needs asynchronous initialization, the pattern we've used in other components (like grain storage) is for them to subscribe to the silo lifecycle and initialize themselves at startup, rather than waiting until they're resolved from the container. I don't think maintaining the previous asynchronous construction (for compatibility reasons) is sufficient to justify deviating from the common pattern we've settled on in other areas of the system for the following reasons.
- I don't think most users have even implemented a failure handler due to the difficulty of doing so prior to this change, so this will be a breaking change for very few.
- I don't think asynchronous initialization will be necessary very often.
- If asynchronous initialization is necessary, it's better to perform it at silo startup than when the component is resolved from the container, as we have better testing and recovery patterns around silo initialization problems.
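The two escape hatches described above can be sketched like so (assuming the new DI-based registration surface; MyFailureHandler and its members are hypothetical):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Orleans;
using Orleans.Runtime;
using Orleans.Streams;

// Option 1: if the handler carries per-queue state, register it as transient
// so each resolution produces a fresh instance instead of a shared singleton:
//     services.AddTransient<IStreamFailureHandler, MyFailureHandler>();

// Option 2: if the handler needs asynchronous initialization, participate in
// the silo lifecycle and initialize at startup rather than at resolution time.
public class MyFailureHandler : ILifecycleParticipant<ISiloLifecycle>
{
    public void Participate(ISiloLifecycle lifecycle)
    {
        // Run InitializeAsync when the silo reaches the application-services stage.
        lifecycle.Subscribe(
            nameof(MyFailureHandler),
            ServiceLifecycleStage.ApplicationServices,
            ct => this.InitializeAsync(ct));
    }

    private Task InitializeAsync(CancellationToken ct)
        => Task.CompletedTask; // perform async setup here

    // IStreamFailureHandler members elided for brevity.
}
```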
needs rebase.
/// </summary>
/// <param name="queueId"></param>
/// <returns></returns>
Task<IStreamFailureHandler> GetDeliveryFailureHandler(QueueId queueId);
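With the handler now coming from the container, an adapter factory's implementation of this method can reduce to returning the injected instance (a sketch; the field and constructor wiring are hypothetical):

```csharp
// Sketch: the factory takes the DI-registered handler in its constructor and
// hands the same instance back for every queue, wrapped in a completed task.
private readonly IStreamFailureHandler failureHandler;

public Task<IStreamFailureHandler> GetDeliveryFailureHandler(QueueId queueId)
    => Task.FromResult(this.failureHandler);
```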
JanEggers
Oct 9, 2019
any chance this pr is extended and the whole IQueueAdapterFactory is replaced by DI?
jason-bragg
Oct 9, 2019
Author
Contributor
That's roughly the plan, long term.
The queue adapter was just supposed to be a thin abstraction layer over the underlying queuing system to allow the persistent stream infrastructure to run on a range of queuing solutions. Over time and under delivery pressures it took on more features and complexity, in part because there was no uniform factory pattern in Orleans at the time.
Streams was one of the systems that prompted the adoption of DI, as it necessitated either the development of our own factory patterns or leveraging existing solutions. DI was selected, and we've been (slowly) refactoring streams and other systems over time to leverage it.
There is a fair amount of technical debt in our streaming infrastructure, but the systems work sufficiently well to rarely become the priority. We welcome help cleaning it up! :)
No description provided.