Future Rx.NET Packaging · dotnet/reactive · Discussion #2038 · GitHub


Nov 13, 2023 · 17 comments · 49 replies


-

Update 2025/08/01: please see #2211 for an update on progress, detail on prototypes, and discussion about options now under consideration.

We think that it might be necessary to change how Rx.NET is packaged in NuGet in a future version. We would much prefer not to do this, but the alternative appears to be a choice between leaving a major problem unfixed, and breaking backwards compatibility.

The Major Problem we Want to Fix

One of our main goals for the next version of Rx is to fix the problem where taking a dependency on System.Reactive can implicitly give you a dependency on all of WPF and Windows Forms, which can in turn cause deployments to become tens of megabytes larger than they need to be.

See AvaloniaUI/Avalonia#9549 for an example of this kind of problem.

This is an unfortunate consequence of a decision made in Rx v4. The "great unification" in that release meant that you needed only a single NuGet reference to System.Reactive to get all of Rx. This was much simpler than previous releases. (More subtly, it also happened to fix some problems that used to occur in plug-in systems.) At the time it was introduced, this did not cause the problem just described, for the simple reason that WPF and Windows Forms were only available in .NET Framework. If you targeted .NET Framework you got a version of Rx.NET that included WPF and Windows Forms support. And if you targeted .NET Core, you got one that didn't.

Where it all went wrong was when .NET Core 3.1 added WPF and Windows Forms support. The difficulty arose because not all .NET Core 3.1 systems have these frameworks. What was Rx to do? The solution it adopted was to look at your application's target framework moniker. If you had a Windows-specific TFM targeting a sufficiently recent version (e.g. net5.0-windows10.0.19041) then it would give you the version of Rx with WPF and Windows Forms, but otherwise it would not.

But this makes no sense for UI applications using other UI frameworks (e.g. Avalonia). Those have no need for Rx's WPF and Windows Forms features.

It becomes particularly disastrous if you create a self-contained deployment, because this will now include all of the WPF and Windows Forms library components. This tends to make your application about 30MB bigger than it needs to be.

What needs to happen

Applications targeting Windows-specific TFMs need to be able to use Rx without ending up with a dependency on WPF and Windows Forms. Dependencies on Rx's WPF and/or Windows Forms functionality should be an opt-in feature.

Implications for packaging

There are essentially two options:

  1. Modify System.Reactive so that it no longer automatically includes WPF and Windows Forms features
  2. Deprecate System.Reactive and introduce a new package to be used as the new "main" Rx package

In either case, WPF and Windows Forms features would move out into new NuGet packages. (With 2, there is the option to have the deprecated System.Reactive comprise type forwarders to the new version, but it doesn't necessarily have to work that way.)

What we'd like to do but can't

If we thought it was possible, we'd prefer option 1 above: to remove all UI-framework-specific features from System.Reactive. The API surface area that you get today (Rx v6) if you target net6.0 has absolutely no UI-specific functionality. Ideally we'd like that to be the case no matter what your target framework is.

If we were able to rewrite history, we'd make it so that Rx v4 had worked this way, and that the WPF and Windows Forms (and UWP) features were always out in separate components. But obviously we can't do that. And unfortunately, that's why we don't believe we can do this in Rx 7.0 without breaking backwards compatibility in a serious way.

To understand why, consider an application MyApp that has two dependencies (which might be indirect, non-obvious dependencies): UiLibA, which depends on Rx 6 and uses its WPF-specific DispatcherScheduler, and NonUiLibB, which also depends on Rx 6 but uses none of its UI-specific features.

Now suppose MyApp needs to upgrade to a newer version of NonUiLibB. What if this new NonUiLibB now depends on Rx 7.0 and uses a v7.0-specific feature? (E.g., imagine for the sake of argument that we add a RateLimit operator to deal with the fact that almost nobody likes Throttle.)

So we now have this situation: MyApp needs UiLibA (built against Rx 6, and using DispatcherScheduler) and the new NonUiLibB (built against Rx 7, and using RateLimit) at the same time.

So what will happen if we have modified System.Reactive so that it no longer automatically includes WPF and Windows Forms features? Either MyApp will load System.Reactive v7, which won't have DispatcherScheduler (because that's a WPF feature), so UiLibA is going to get a MissingMethodException; or the app will load an older System.Reactive, in which case Observable.RateLimit won't be present, so NonUiLibB will get a MissingMethodException.

You can't fix this by also adding a reference to, say, System.Reactive.Wpf (or whatever we call the package that contains DispatcherScheduler) because UiLibA won't be looking for DispatcherScheduler in that library. It will expect it to be in System.Reactive.

This is a bad situation to put MyApp into. It can't work around this (except by sticking with the old version of NonUiLibB, or by dropping either UiLibA or NonUiLibB). The particularly insidious thing about this is that the problem is caused entirely in code not owned by MyApp, so if Rx v7 went in this direction, applications would start finding themselves in this situation and not be able to do anything about it.

You might think we can solve this by adding type forwarders in System.Reactive so that libraries such as UiLibA that are looking for DispatcherScheduler in that library will be told where it really is. The reason this doesn't work is that doing so requires System.Reactive to have a dependency on whatever library now contains the real DispatcherScheduler. That means that a reference to System.Reactive would implicitly mean you also get a reference to the WPF and Windows Forms libraries. So we're back at the problem we were trying to solve.
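To make the type-forwarder point concrete, this is roughly what such a forwarder looks like (illustrative sketch only):

// Illustrative sketch: the kind of forwarder System.Reactive would need in order
// to redirect existing references to DispatcherScheduler to its new home. This
// only compiles if System.Reactive references the assembly that now defines the
// type, which is precisely the WPF dependency we are trying to get rid of.
using System.Runtime.CompilerServices;

[assembly: TypeForwardedTo(typeof(System.Reactive.Concurrency.DispatcherScheduler))]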

What we think we will have to do (reluctantly)

We think the only way forward is to deprecate System.Reactive, and to introduce a new NuGet package that replaces it as the 'entry point' for Rx.

We hate the idea of doing this. Rx.NET has already been through several confusing iterations of its packaging solution. If we knew of a way to solve AvaloniaUI/Avalonia#9549 while retaining System.Reactive as the main Rx package, without breaking existing applications that depend on Rx, we would prefer that. But so far, every proposal we know of to do this causes problems that might put existing applications into a state where they are stuck on old versions of libraries.

UPDATE 2023/11/21: A possible alternative

Thanks to a question from @heronbpv, we did some further investigation and may have come up with an alternative. The full explanation is at #2038 (reply in thread) but in summary: it looks like the use of DisableTransitiveFrameworkReferences can work around this problem today (even in Rx 6.0.0). If that proves to be viable for affected parties, then that somewhat reduces the urgency around fixing this.
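For anyone wanting to try that workaround, a minimal sketch of what it might look like in an affected application's project file (the TFM and package version shown are just examples) is:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net6.0-windows10.0.19041</TargetFramework>
    <!-- Stops the WindowsDesktop (WPF/Windows Forms) framework reference from
         flowing into this application transitively via System.Reactive. -->
    <DisableTransitiveFrameworkReferences>true</DisableTransitiveFrameworkReferences>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="System.Reactive" Version="6.0.0" />
  </ItemGroup>
</Project>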

If this does work out, we might be able to move more slowly towards the state in which UI-framework-specific functionality is fully separated out. We would likely still create separate NuGet packages for each UI framework, but leave the existing types in place in System.Reactive, marking them as [Obsolete]. And then, several years from now, we could finally remove them completely.
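As a rough sketch of that interim state (illustrative only, not a committed design), one of the WPF-specific types might look like this, with the type staying where it is but warning anyone who still uses it:

namespace System.Reactive.Concurrency
{
    // Illustrative: the real type would keep all of its existing members; only the
    // [Obsolete] attribute (and its wording) is the new part being sketched here.
    [Obsolete("UI-framework-specific schedulers will move to separate NuGet packages in a future Rx release.")]
    public sealed class DispatcherScheduler
    {
    }
}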

Should we fork Rx?

It has been suggested (e.g. see #2034 (comment) and the discussion following that comment) that there is an opportunity for a "clean break" here. There are at least two (incompatible) views on what that might mean, including:

  1. don't bother with backwards compatibility for System.Reactive—remove the UI-framework-specific parts, and just declare that v7+ doesn't have these, leaving it for existing applications to try to fix up the consequences of this
  2. introduce what is effectively a completely distinct new Rx.NET that is in completely different packages; new functionality goes into these new packages, with "old Rx" only getting critical bug fixes

As I understand it, a motivation for this is to enable more innovation. I don't currently have a clear idea of what changes we might make in this new scheme that we currently cannot, although one possible area would be to separate out the parts of Rx that are not well suited to trimming. (There's some code relating to IQbservable where we couldn't find entirely satisfactory ways of annotating it for trimmability, and you tend to fall off a cliff in terms of binary size if you start using that.)

I currently think that if we were going to do this, we'd need to build up a shopping list of the big changes we think we'd want to make to take advantage of this "clean break", because once we've pulled that trigger (if we do), the opportunity to make further significant changes will be gone.

What do you think?

We've opened this discussion because we expect people to have opinions on this, and we hope that people might be able to design solutions to this that haven't occurred to us—perhaps there is a better way that we've just not seen yet. Please let us know what you think!


-

#2034 illustrates one possible implementation. It makes the following choices:

To be clear, we've not decided that we will do it this way.

This is just a prototype to show how one solution to this problem would look, and to verify that it does indeed fix the 30MB installer problem that we're setting out to solve. (It does.) We don't much like the name System.Reactive.Base, but System.Reactive.Core is already taken.


1 reply

-

Same as my thought. Totally agree with it.


-

How widespread is the use of DispatcherScheduler? What are the chances of UiLibA never updating to look for DS somewhere else?

I presume if UiLibA is some kind of open source itself, they will adapt (or be forked). If proprietary, well, whoever does that may be inclined to keep supporting their customers.

As for the clean start, there would be a lot of clashes due to the use of extension methods, right? For example, if old Rx and new Rx needed to be interoperated, whose Select() will be applied on that IObservable?


5 replies


-

I have an idea.

How about turning DispatcherScheduler into a delegator with no real implementation, just forwarding to an IScheduler? The default behavior would be to throw UnsupportedOperationException. A working DispatcherScheduler in some other library and package name would then register itself with Rx's version. Thus an updated UiLibA would simply use the new library's DS indirectly. If UiLibA was not updated, the app that uses it and the newer Rx would then just issue that registration itself.
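Roughly something like this (just a sketch of the shape of the idea, ignoring the rest of the scheduler's current public surface; I'm using .NET's NotSupportedException for the unregistered case):

using System;
using System.Reactive.Concurrency;

// Hollow DispatcherScheduler: no real implementation, it only forwards to
// whatever IScheduler a platform package (or the app itself) registers.
public sealed class DispatcherScheduler : IScheduler
{
    private static IScheduler? _registered;

    // Called once at startup by the WPF-specific package, or by the app.
    public static void Register(IScheduler implementation) => _registered = implementation;

    private static IScheduler Impl =>
        _registered ?? throw new NotSupportedException(
            "No DispatcherScheduler implementation has been registered.");

    public DateTimeOffset Now => Impl.Now;

    public IDisposable Schedule<TState>(TState state, Func<IScheduler, TState, IDisposable> action)
        => Impl.Schedule(state, action);

    public IDisposable Schedule<TState>(TState state, TimeSpan dueTime, Func<IScheduler, TState, IDisposable> action)
        => Impl.Schedule(state, dueTime, action);

    public IDisposable Schedule<TState>(TState state, DateTimeOffset dueTime, Func<IScheduler, TState, IDisposable> action)
        => Impl.Schedule(state, dueTime, action);
}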


-

Unless UiLibA somehow uses new DispatcherScheduler(...). Yeah, types would be broken.

I presume changing the signature to new DispatcherScheduler(object dispatcher) wouldn't bind.


-

How widespread is the use of DispatcherScheduler?

I don't know how we would obtain data that could answer that question. A GitHub code search reports about 1,900 files using it, but I'm not sure that's a very illuminating answer to your question.

What are the chances of UiLibA never updating to look for DS somewhere else?

I've seen enough examples of projects stuck on some version of a component (for various reasons, including the component having essentially fallen out of support but there not being any realistic alternative) to have come to the view that it is problematic to hope that dependencies will simply always be updated. (And even when they are, it's often not on the schedule you'd want.)

In conclusion, my answer to this question is: I think it's probable that the chances are high enough to matter. (Rx is pretty widely used, so even if a small fraction of users were to be affected, that's a big impact. And I'm unconvinced it's going to be all that small a fraction.)

Rx has historically maintained pretty high standards of backwards compatibility. So for that reason we don't really know what the impact of a breaking change will be.

As for the clean start, there would be a lot of clashes due to the use of extension methods, right?

That is a very good point. I had been thinking that this wouldn't be a big problem, because most of what Rx does is most naturally used as an implementation detail: the number of types it defines that you would likely make part of your own class's public API is fairly small, and the two most important types, IObservable<T> and IObserver<T>, are defined in the runtime libraries.

But I hadn't thought about what happens when some random dependency deep in your tree brings in "the other Rx". That would be a major problem.

So that rather suggests that a complete split would be a mistake, and that we'd want to unify to exactly one version of Rx.


-

How about turning DispatcherScheduler into a delegator with no real implementation

The problem with that is that its public-facing API includes types defined by WPF. So even with a hollow shell, you still end up with a dependency on WPF.



-

But I hadn't thought about what happens when some random dependency deep in your tree brings in "the other Rx". That would be a major problem.

@idg10 I can confirm this is a huge problem. I tried such a workaround for moving away from the old v2.2.5 binaries and found that there was no way to escape it, since any little package left unchanged would break it all over again. The only alternative was to use a different TFM, as I suggested below.

So that rather suggests that a complete split would be a mistake, and that we'd want to unify to exactly one version of Rx.

Completely agree. I think it would not be enough to just fork the package; you would need to fork the namespace itself, and that is exactly what I think would create massive confusion within the community.

So even with a hollow shell, you still end up with a dependency on WPF.

We need to cut these out; the current architecture is completely untenable. It is very sad that it got to this state, but I don't think we need to give up on change. @akarnokd see my proposal below, I would be curious to hear your feedback.


-

You can't fix this by also adding a reference to, say, System.Reactive.Wpf (or whatever we call the package that contains DispatcherScheduler) because UiLibA won't be looking for DispatcherScheduler in that library. It will expect it to be in System.Reactive.

This is a bad situation to put MyApp into. It can't work around this (except by sticking with the old version of NonUiLibB, or by dropping either UiLibA or NonUiLibB). The particularly insidious thing about this is that the problem is caused entirely in code not owned by MyApp, so if Rx v7 went in this direction, applications would start finding themselves in this situation and not be able to do anything about it.

I think this is possibly more tolerable than you anticipate. This wouldn't break old versions of people's apps, only new versions when they upgrade. I think it's ok to introduce breaking changes, especially considering the benefits, and advantages over alternative options. Perhaps it's the point of this discussion, but maybe a survey would help? Let your users tell you whether they think this approach is a problem, you might find that most are ok with it.

As an example, the last few versions of MediatR have introduced breaking changes. These have been potentially frustrating, but ultimately they are significant improvements and I think users appreciate them.


11 replies

-

And I know I'd rather see a consolidation of community contributions to the official repo to help drive the project forward, rather than seeing it fracture and that effort become diluted...


-

On the contrary, it's at the heart of the proposed change, unfortunately.

Sorry, you're right - what I meant to say is, the problem of updating dependency chains is not unique to this library, or to this change.


-

@idg10

Even "type 2" changes as described in my earlier post? (That is, the kind where application authors simply can't fix the problem, except by stopping using whichever components are putting them in this situation.)

If I were in this scenario, yes, I wouldn't hold it against System.Reactive for introducing a Type 2 scenario, if it means I'm no longer forced to pull in other large dependencies I may not need. I actually am in a similar situation at work right now: we're gradually refactoring out our dependency on a third-party library that's holding us back from updates to quite a few other libraries, and most significantly, .NET Framework itself. I don't hold it against the folks that created .NET Core and .NET 5+ that this other third-party library isn't updating to match. We're just gradually removing our dependence on this library.

Also, to be clear, I didn't mean to imply that these changes are not breaking, by putting "breaking changes" in quotes. I'm honestly not sure why I did.

  1. do you think it's necessary to break backwards compatibility in order to get a "sane dependency tree"?

My understanding was that all the proposed solutions would be breaking backwards compatibility in some way. Your illustration with regard to type forwarders seems to be what I was missing. I'm actually not familiar with type forwarders, so that was very helpful. This setup does need to come with some clear documentation for newcomers to the ecosystem, then, because it introduces more complexity for developers and the dependency-resolution process.

  2. do you think that there's some specific way of breaking backwards compatibility that guarantees that we end up in a better place?

So far, my favorite idea was (coincidentally, I promise) the one I suggested here, assuming it works. It also looks like you mentioned the same idea here. I like it because it follows a "normal" process of introducing breaking changes pretty closely: Changes are presented directly to developers, as they write code, while also providing lead time to address them.

So even if you don't care about back-compat scenarios, it doesn't look like we have very much to gain here from a breaking change.

With everything you've presented, I think this is a fair assessment.



-

@JakenVeina What I don't like about the current proposal is there is no plan to ever phase out the bloated packages and therefore no incentive for current library maintainers to stop developing the way they have so far. Will we continue to maintain type forwarders for UI schedulers indefinitely? Will we continue to maintain System.Reactive.Core undeprecated indefinitely? Note that deprecating a package on NuGet.org comes with expectations that the package will stop being updated, so we would have to stop shipping releases of System.Reactive forever.

@idg10 has mentioned elsewhere that because Rx uses the System namespace it somehow must hold itself to a high standard of backwards compatibility. I think this is wrong-headed, because that is not how even .NET itself is behaving at the moment. This was true of .NET Framework, but you only need to take a cursory look at the lists of breaking changes introduced since the release of .NET 6.

The very fact that we have these long lists of binary breaking changes should be a big clue about how Microsoft is dealing with evolving their ecosystem. Even just the first item on that list already raised a huge backlash from the community, and is holding back and forcing the refactoring of entire projects across the .NET ecosystem, including core plotting libraries. And yet here we are with .NET 8 and the world hasn't ended.

Microsoft have repeated the mantra time and again that they will break things for the sake of sanity, and that consumers are given 3 years to adapt between LTS releases. Right now I am assuming there is no promise of stable binary compatibility across .NET TFMs so even if System.Reactive stays binary stable your app can still break.

I don't understand why given this we are assuming that System.Reactive needs to somehow hold itself to a higher standard than the core framework itself.

There is a namespace called System.Reactive, a package called System.Reactive, and already four other packages that do type forwarding. Do we really need to create confusion by adding a 5th package that has nothing to do with this namespace that somehow is not even a type forwarder itself?

I don't buy the argument of counting nodes in a "dependency tree" as a measure of complexity. Humans often care more about names than they care about numbers. My definition of "sanity" is not just that we have one package, it's that the root of the namespace System.Reactive should not be in a package called System.Reactive.Base.

I don't understand why we are all giving up when the tools are there to deal with this gracefully, with respect for both the existing community and new developers. As it stands, I cannot say I am looking forward to the day where Rx 101 starts with "first install the System.Reactive.Base package (DO NOT INSTALL THE System.Reactive PACKAGE)"...



-

there is no plan to ever phase out the bloated packages and therefore no incentive for current library maintainers to stop developing the way they have so far. Will we continue to maintain type forwarders for UI schedulers indefinitely?

Were you aware that Rx has already been doing this for about 8 years?

Do you know the names of the existing type forwarders that Rx has been maintaining for all that time? If not, can you honestly say they are a problem? And even if you do know what they are, do you believe their continued existence is causing a problem? I've not been aware of them being a problem, but perhaps there's something I don't know.

Would it be nice to get rid of them? Sure. It's one less moving part. But as far as I know they're not really a big deal, and I have no way of knowing what the impact would be of dropping them.

Re: .NET and breaking changes

I've not been through every one of the changes in the lists you linked to, but I did systematically go through quite a lot of them to see if I was missing something. I have a few observations.

First of all, the .NET team evidently has their "breaking change" detector set to a much more sensitive level than the Rx project does, figuratively speaking. A great many of the things they label as "binary breaking changes" are things we probably wouldn't think of as breaking changes in Rx, because they won't break most people in most circumstances. (That makes these kinds of changes very, very different from something as radical as removing something from a public-facing API without first going through some fair-warning period of Obsolete annotation.)

E.g., look at the various "binary breaking changes" to FileStream. I followed some of that work while it was in progress. All those changes were agonised over in multiple review meetings with many contributors, where a huge amount of effort was put into working out how to minimize the possible impact and to be certain that the pay-off was large. The goal was that it should effectively be binary-compatible in practice. And they also provided a mode where you could tell it to use the old behaviour, just in case the new behaviour caused you a problem. (Some of the changes in that list of "breaking changes" relate to the fact that the fallback mode is now being phased out after several years of customer usage demonstrating that the changes were indeed binary-compatible in practice for almost everyone.)

The effect is that for the overwhelming majority of consumers, those FileStream changes won't break any existing code. It is nonetheless labelled as a set of binary breaking changes according to those docs. So, far from being an example of Microsoft's readiness to impose breaking changes, I'd say that's actually an excellent example of the lengths they go to in order to preserve backwards compatibility, even when they make what is, according to the very exacting criteria they set themselves, a breaking change.

Quite a few more of these changes look like various stages of multi-year moves to make things obsolete. (Quite a few relate to the security-driven decision to gradually stop people using CLR serialization.)

As far as I can tell, most of the things labelled as binary breaking changes there don't come close to the destructive impact of the kinds of changes we're discussing here.

Another observation is that a lot of the changes seem to relate to evolution of application frameworks (e.g., ASP.NET Core or Windows Forms). I think there's a major qualitative difference between evolving the design of a framework, and introducing a breaking change to something like LINQ to Objects. Choosing a framework is a top-down decision not taken lightly. The use of a library feature is something other libraries will often impose on you. This makes them quite different things. And I'd ask you: which do you think Rx is more like? ASP.NET Core or LINQ to Objects? I think it's more like the latter. Rx is core library functionality, not a framework, and although there are some library types in that list of changes, the ones I looked at were all more like the FileStream changes: labelled as binary breaking changes, but with every effort made to ensure that they won't actually break most consumers in practice.

I don't understand why given this we are assuming that System.Reactive needs to somehow hold itself to a higher standard than the core framework itself.

We aren't.

We hold ourselves to a much lower standard.

We have to, for the simple reason that we just don't have the resources to take the care and attention that the .NET team does when making changes of the kind they did to improve FileStream performance.

I don't believe I ever said we had to exceed or even equal the standards set by the frameworks (standards which I think you have unfairly discounted in your argument). What I did say is that because we're in the System namespace and because of what we've done in the past, people will expect:

higher standards of backwards compatibility than with the average NuGet package.

I did not say we're going to go to the same lengths as the .NET team. Just that we should do better than average.

As those links you've posted show, the .NET team will label some things as a "binary breaking change" if it's merely an observable change in behaviour. That's actually a much higher standard than the one I'm aspiring to in this discussion. By that standard, #1968 and #1882 are both binary breaking changes, but I'm OK with that. Those are on the "internal implementation details of FileStream are now slightly different" level.

I think it's really important to recognize that removing things from the public API of System.Reactive without having first gone through any phase of Obsolete is a huge act of violence compared to the vast majority of the changes in those lists you link to. Compare it to, say, https://learn.microsoft.com/en-us/dotnet/core/compatibility/core-libraries/7.0/filestream-compat-switch and it seems absurd to put these in the same category.

It is, in my view, a mistake to consider all "binary breaking changes" as being essentially the same sort of thing. I therefore do not think the fact that .NET itself has made a lot of binary breaking changes gives Rx an excuse to do things that we know will definitely break things for our users.

In summary, I've tried to clarify that my view on the need for backwards compatibility is not quite as extreme as you had thought. I have always been aware that we're not on the same level as the .NET team, but stand by my view that we need to be better than average.


-

@idg10 thanks for creating this discussion, I think we should really exhaust all possibilities here to make sure we get the best decision for the community. I wholeheartedly agree with your option 1). By default we should try to save System.Reactive as much as possible. I feel strongly that forking will create confusion, even if the package is deprecated (will the new package introduce a new namespace?).

For completeness, I would like to submit to discussion a hybrid alternative first suggested in #2034 (comment) that I think does go a little bit in the direction of what you would like to achieve ideally.

Essentially, consider the possibility of Rx v7 being a multi-TFM package (same as today) with net472, net6.0 and net8.0, but where, to keep full backwards compatibility with existing code, the net472 and net6.0 targets would keep the existing behavior and bundle the WPF and WinForms schedulers. For the net8.0 TFM, we would take them out and ask people to install the new platform-specific packages. We would both bump the major version of Rx to signal the breaking change and introduce the new TFM for .NET 8.0 (which would be nice anyway since it's just coming out).

The reason this is somewhat nice is that TFMs are still one of the most used and well understood ways of controlling the dependency graph on NuGet. If you say your app needs to run on .NET 7.0 then you simply can't pull .NET 8.0 dependencies by mistake.

This means that your scenario of irremediably breaking something by accidentally upgrading the Rx.NET package won't happen unless you also at the same time update the TFM for your app. For me this is actually a big deal, since I don't think people change TFMs lightly, especially on client-side apps. If they do, dependencies usually need to be properly audited anyway (exactly because dependency graphs can change) and I don't think people should be surprised to find things can break: after all that is the point of versioning dependency trees by TFM.

You might say this is not strictly backwards compatible because of platform unification during the dependency resolution stage, but to me this is still better than the other two extreme options in terms of combining compatibility with a clean break. This way someone creating a new .NET 8.0 app today will finally have a sane starting point. At the same time, someone maintaining an older app can still feel confident upgrading the Rx package to the latest version, since it won't break their dependency tree as long as they keep to their existing TFM.

This seems like an acceptable compromise, as it would give even people on existing TFMs a chance to start upgrading and more easily explore the possibility of bumping TFMs by testing and identifying any problems, while contacting any legacy project maintainers or forking problematic dependencies throughout the coming year.


6 replies


-

the net6.0 target does not in fact bundle WPF and Windows Forms schedulers. Those are only in the net472 and net6.0-windows10.0.19041 targets. The existing net6.0 target already looks exactly like you are proposing.

So existing .NET 6.0 apps would get the same behaviour as today. .NET 8.0 applications would find that types such as [System.Reactive]::ControlDispatcher no longer exist, which I believe is what you intend.

@idg10 that is great news and definitely sounds like it would do exactly what I was hoping.

But my example doesn't have them using a legacy package. The app in that example uses the current version of Rx. Rx 6.0 is not legacy. It's the only choice you have today if you want to use Rx in green field development of a .NET 8.0 app.

I understand this concern, but this is actually then a different question: "is .NET 8.0 the right TFM to introduce this change?".

For that particular question, I am willing to concede that maybe you are right and now is not the right time, since we were not ready in time for the release, which has now passed; people can use it and they will build up legacy. But we can still use the exact same approach for .NET 9.0.

In the meantime you can still announce the intention to remove platform-specific TFMs from System.Reactive for .NET 9.0 and consider introducing some preview packages like @matt-goldman suggested.

As the main maintainer you would ultimately make that call, but it looks to me now that this strategy is the strongest option, especially given everything you outlined above.



-

To summarise and check if I understand correctly, the logic if we wanted to do this today would be to release Rx v7 with the existing TFMs plus two new ones: net8.0 and net8.0-windows10.0.19041.

The last TFM would be literally just a replica of net8.0 but is actually crucial if I understand unification correctly.

You need it since applications targeting net8.0-windows might fall back to net6.0-windows if the newer target is not there, and thus retain the legacy behaviour.

I hope that in later releases, e.g. .NET 9.0 we would be able to drop the windows TFMs entirely since any fallbacks would get the same assembly, but I don't know the details.

Either way this seems workable unless I am missing something?


-

but this is actually then a different question: "is .NET 8.0 the right TFM to introduce this change?".

Is there any TFM in which this change wouldn't cause all these same problems?

I think the right TFM to fix the underlying problem would have been .NET 6.0 because this has already been causing pain for some time, and has caused at least one major project to abandon Rx.NET. The idea of deferring it for another version or two is not one I much like, so I think there would need to be a pretty strong upside.

So let's think the scenario through to see what the upside of deferring this fix might be. Suppose today we mark ControlDispatcher, CoreDispatcher, and the other problematic types as deprecated. We ship Rx 7.0 in that state, and the basic problem persists, but by having signalled that these types are deprecated, we pave the way for some future version in which those types go away, after however long we think is the minimum acceptable time between deprecation and outright removal. In the meantime, we ship equivalent functionality in new types in separate components, so people have a way to move forward.

The upside, I believe, is that at some point in the future we get to ship the System.Reactive that we always should have shipped (i.e. one that has no UI-framework-specific types, and no OS-specific TFMs).

That's undoubtedly a good upside. But the question then is: how long is long enough?

Elsewhere you've raised issues dating back to Rx 2.2.5, a version that shipped about 9 years ago. If that version is still relevant to this discussion today, what's the earliest you think we could actually release a version of System.Reactive in which we had removed these types?



-

That's undoubtedly a good upside. But the question then is: how long is long enough?

@idg10 To answer this question I feel I need to understand and qualify a bit better the nature of a "type 2" change. Specifically, I need to understand whether we agree on the "target audience" for a "type 2" change.

I only see a way out if we are willing to accept first that the target audience for "type 2" changes are maintainers of existing applications. If this is the case, then I submit that the right TFM to introduce this fix would be any TFM that is not released right now, since by definition there can't be maintainers of existing applications on TFMs that do not exist yet.

If on the other hand the target audience for "type 2" changes are all current and future developers of both existing and yet to exist applications and TFMs, then I agree with you there seems to be no hope for a clean break on System.Reactive. Below I will talk only of my "hopeful" scenario.

.NET 8.0 is now released, so let us call net8.0 the existing TFM and net9.0 the hypothetical future TFM. Imagine a developer of an existing .NET 8 app targeting net8.0-windows10.0.19041 that started using Rx this month and is pulling in a bunch of dependencies using Rx v6.

Let us say Rx v7 provides a net8.0-windows10.0.19041 with UI schedulers for WinForms and WPF, etc, and that this will be the last TFM that Rx targets with the windows suffix, i.e. we never release a net9.0-windows10.0.19041 at all and we release Rx v8 synchronized with .NET 9.0 next year with only a net9.0 TFM (plus the legacy ones, including net8.0-windows10.0.19041). What will happen to both our legacy maintainer and developer of a brand new app?

First, I think we can agree that as long as the legacy maintainer does not change his TFM, nothing will break.

What I was still curious about is whether we could really free developers of new apps entirely from the "curse of schedulers" by dropping the platform specific TFMs. Can we really say for sure that if a new developer targets net9.0-windows10.0.19041 they won't pull net8.0-windows10.0.19041 inadvertently through the compatibility unification mechanism, even though there is a net9.0 only target?

To answer this question I made a mock project implementing this idea with .NET 6 and .NET 7 (just because I have them handy). Here are the relevant bits:

Mock System.Reactive v7 targets

<TargetFrameworks>net6.0;net6.0-windows10.0.19041;net7.0</TargetFrameworks>

As a control, I introduced an API surface exclusively for the net6.0 target and another for the net7.0 target so I could check which one would be available at runtime. I also created a test dependency that targets only net6.0-windows10.0.19041 to see how indirect dependencies would be resolved.

namespace System.Reactive
{
#if NET6_0
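    // Only compiled into the net6.0-based targets: stands in for the legacy API surface.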
    public class DeprecatedApi {  }
#endif
#if NET7_0_OR_GREATER
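    // Only compiled into the net7.0 target: stands in for the cleaned-up API surface.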
    public class ShinyNewApi {  }
#endif
}

I started out with a "legacy app" targeting net6.0-windows10.0.19041 and verified that obviously it cannot access ShinyNewApi since it only exists on .NET 7.0. I then modified it so that it targets net7.0-windows10.0.19041 to check whether ShinyNewApi (which is exclusive to net7.0) gets pulled or whether dependency resolution would somehow prefer the net6.0-windows10.0.19041 target because of platform specificity.

As I hoped, the net7.0 TFM was preferred and ShinyNewApi is available. This is good since it is at least confirmation that a clean break for new developers targeting directly Rx vNext on the next TFM would free them from the "curse of schedulers". What about indirect dependencies? Unfortunately here the story gets complicated as probably @idg10 was already aware and trying to explain, but it wasn't entirely clear to me.

First of all, the main thing to notice is that the most specific TFM (in this case net7.0) is always preferred. This is actually the version of Rx that gets pulled, so the clean break is enforced and cannot be overridden by any indirect dependency. This is good news. The bad news is that the indirect dependency that targets the older TFM (net6.0-windows10.0.19041) is still allowed to be pulled into the main app, even though its indirect dependency is resolved to a package target that does not support the stated TFM. This is unfortunate, and I think potentially even an issue with NuGet dependency resolution, but ultimately let's accept it and look at it one more time from the point of view of our stated "target audience".

I think this is honestly the best scenario I can conceive of for a clean break that preserves System.Reactive as the base package. If the assumptions I held about the target audience are incorrect and undesirable for the community, then I agree I cannot see any other way.

That's undoubtedly a good upside. But the question then is: how long is long enough?

As a rule of thumb I would use Microsoft's own end-of-support dates as deprecation targets, so if we introduce the break in .NET 9, the back-compatible TFMs would be around until .NET 8 end-of-support, which is currently stated as November 10, 2026, so 3 years from now.

Elsewhere you've raised issues dating back to Rx 2.2.5, a version that shipped about 9 years ago. If that version is still relevant to this discussion today, what's the earliest you think we could actually release a version of System.Reactive in which we had removed these types?

If anything, I think our story is an argument for why these kind of problems are not the end of the road that they are often made out to be. Our application is still going strong and we have a roadmap for changing into modern .NET.

Rx has historically maintained pretty high standards of backwards compatibility.

Our story shows this to be untrue, since we were effectively split off from the entire Rx community without even the possibility of using type forwarders (because of the arbitrary strong name change). In fact, if anything, the strong name key change showed that Microsoft does not consider the use of the System namespace by itself to be worthy of signing an assembly with its public strong name key. Not sure what that means in terms of expectations and aspirations for Rx.NET.

In the end what I believe can save us is the fact that the library is open-source and evolving, and not any kind of implicit contract on the immutability of API surface areas. For more examples see also #2038 (reply in thread).


-

I need to understand whether we agree on the "target audience" for a "type 2" change.

I only see a way out if we are willing to accept first that the target audience for "type 2" changes are maintainers of existing applications.

Sadly, this isn't necessarily the case. Someone could start writing an application next year (let's say August 2024) targeting .NET 6.0 (and yes, I know, but people do this, sometimes for unavoidable reasons; Azure Functions .NET 8.0 support has some limitations right now, for example) and then take a dependency on some other component, let's call it ProblemSoft.Lib, that has a reference to Rx 5.0 and doesn't use any of Rx's UI-specific features. Then in September 2024, they add another package, FlashyLib, that also has a reference to Rx 5.0 but which does use the WPF-specific features. In October 2024, a new ProblemSoft.Lib package ships with a feature that they need. As it happens, it has two targets: net6.0 and net8.0. The net6.0 version continues to depend on Rx 5.0, but the net8.0 version actually depends on Rx 7.0. This doesn't affect the app because it's using .NET 6.0, so they are still using Rx 5.0 indirectly.

Let's say we released Rx 7 in February 2024. This doesn't affect the application authors at the time, because their application won't exist for another six months. It also doesn't affect them when they start writing it in August, because they're on Rx 5. In November 2024, .NET 9 ships. Also, .NET 6.0 goes out of support, so they try to upgrade to .NET 9.

Now let's look at how a change we might have made to Rx 7 in February 2024, some six months before they started work on their app in August 2024, is going to cause them to become victims of a type 2 problem. It's going to stop them from upgrading to .NET 9, or even to .NET 8.

Suppose we did the thing we really wish we could just go ahead and do: remove all the UI-framework-specific stuff from System.Reactive. And we did that in February 2024 (in this hypothetical.)

The app authors in this example (who, don't forget, started work later, in August 2024) find themselves in a type 2 situation when they try to upgrade to .NET 9.0. Just to remind you of my definition:

type 2: we put the app authors in a position where it's not possible for them to get everything working again

Because they upgraded to .NET 9.0, they're now getting the net8.0 target of ProblemSoft.Lib, which wants Rx 7. The build system will upgrade their app to that, and because package versions unify, that means that FlashyLib also gets upgraded to Rx 7.

But FlashyLib now won't work, because it was using the WPF functionality. And in this hypothetical scenario, we removed that in Rx 7. So there now is no version of Rx that works for them. Rx 5 and Rx 6 are older than what the net8.0 target of ProblemSoft.Lib requires, so if that tries to use new members defined in that version we'll get MissingMethodException or similar at runtime. But if we use Rx 7, FlashyLib will break because it relies on Rx's WPF functionality, which (in this hypothetical) we removed in Rx 7.

They can't use either .NET 9.0 or .NET 8.0. .NET 7.0 will already be out of support at this point. Staying on .NET 6.0 will avoid these problems, but that also just went out of support, so they can't do that either. There is nothing they can do to resolve this problem.

So that demonstrates that it would be possible for us to do something to Rx in February 2024 which will cause an application that didn't even exist until August 2024 to have a "type 2" problem in November 2024.

So although that may be a slightly convoluted example, it's an existence proof that your assumption is not guaranteed to be correct. We therefore can't in fact say that victims (a more accurate description than "target audience", I think) of a "type 2 change" can only be maintainers of existing applications. We could do things today that create a time bomb where application development started in the future is initially OK, but runs into trouble later on because of something we already did before they got started.

If this is the case, then I submit that the right TFM to introduce this fix would be any TFM that is not released right now, since by definition there can't be maintainers of existing applications on TFMs that do not exist yet.

Notice that in the scenario I just described, the "type 2" problem becomes apparent at the point where the application developer attempts to upgrade to a TFM that didn't exist at the start of the scenario.

You might be inclined to dismiss this because it's weird and contrived. All I can say is that I've encountered weirder situations than this helping people resolve problems in real applications.

The problems are always lurking somewhere deep in the dependency tree. This is the main reason I am very reluctant to remove things from the API of existing components. I've been on the receiving end of decisions to do that in the past, and it's a world of pain.

Let us say Rx v7 provides a net8.0-windows10.0.19041 with UI schedulers for WinForms and WPF, etc, and that this will be the last TFM that Rx targets with the windows suffix, i.e. we never release a net9.0-windows10.0.19041 at all and we release Rx v8 synchronized with .NET 9.0 next year with only a net9.0 TFM (plus the legacy ones, including net8.0-windows10.0.19041). What will happen to both our legacy maintainer and developer of a brand new app?

OK, so let's tweak the versions and timing of my example to align with your example:

We now have a "type 2" problem. When they upgrade to .NET 10.0, ProblemSoft.Lib v2.0 (which they've happily been using without problems for over a year) will now force them onto Rx v8. But now, FlashyLib v1.0 (which they've happily been using without problems for almost 2 years) no longer works because it was dependent on the Rx WPF features, which are not present in Rx v8. They can't downgrade ProblemSoftLib v2.0 because they needed the new feature.

It's June 2026, so they do have the option to switch back to .NET 8.0. So it's not strictly a "type 2" problem yet. But it will have become one in November when .NET 8.0 support ends.

So in this scenario, a decision taken today, followed up by the removal of a public API in December 2024 causes a type 2 problem in November 2026.

First, I think we can agree that as long as the legacy maintainer does not change his TFM, nothing will break.

If the legacy maintainers do not change the TFM, they will inevitably enter a state where they are running on an unsupported runtime. I would describe this as broken.

So no, I do not agree that nothing will break. Their application will become a security liability in November 2026 if they do not change their TFM before then.

The ultra-short support lifecycles of .NET today make this a much more difficult problem.

If .NET LTS versions had similar support lifetimes as Windows, then I'd totally accept your argument. They could leave their TFM unchanged. .NET 8.0 would remain in support for a decade. That's good enough. But unfortunately the "Long" in LTS is not very long.

It is always a deliberate choice to update the TFM

I think this is the core of where we disagree.

It's always deliberate. It's not always a choice. If you're on a runtime that's about to go out of support, it takes a deliberate action to upgrade to a newer runtime. You have no choice but to take that deliberate action.

For all that I disagree with the details, I think it may be that the resolution is just to have longer timescales. So mark the types as deprecated in the next version. Then after, say, 3 years, we start using NuGet package deprecation: mark the last NuGet System.Reactive drop that provided non-deprecated UI-specific features as obsolete, in order to push people into using versions of the library that will alert them to this looming problem. (E.g., in my scenario, the hope would be that this would be a wake-up call for the authors of FlashyLib.) Ideally this would also give the authors of DoomedApp a heads up, although since the actual use of the problematic UI-specific features is in someone else's library, I don't know if we could do that. Maybe we could somehow add stuff to the package's build folder that says "Hmm, it looks like you're using libraries that might be using features that are going away" so that DoomedApp can go talk to the FlashyLib authors, or maybe find a replacement before it becomes urgent. And then maybe another 3 years after that we can remove it.

So by about 2030 we might finally have recovered from a decision that looked like a good idea in 2018.
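To make the "add stuff to the package's build folder" idea above slightly more concrete, it could be something like a .targets file that NuGet imports into consuming projects and that raises a build warning (purely hypothetical; the target name and wording are made up):

<!-- Hypothetical build/System.Reactive.targets shipped inside the package -->
<Project>
  <Target Name="_RxWarnAboutDeprecatedUiFeatures" BeforeTargets="CoreCompile">
    <Warning Text="This project (or one of its dependencies) uses the Windows-specific System.Reactive target. Its WPF and Windows Forms features are deprecated and will move to separate packages." />
  </Target>
</Project>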


-

Separation of concerns

From my point of view this would be best approached with multiple Core and Platform-specific packages; there could even be multiple core packages to split the core up into specific functional areas such as Schedulers.
With today's DotNet system, having a multitude of DLLs in a project is not really much of a problem, but the overall size can be. We look to the System.Reactive packages as the root of the Reactive world in DotNet, and we should have the ability to extend and alter almost any part of it to achieve the goals that we need to in order to produce the end product our customers need.
Although we expect the Core functionality to follow all the rules of the Reactive coding mechanism, sometimes we need to twist them slightly to get somewhere slightly different, so we need flexibility.

Branch Divide and Conquer

Perhaps the best way to start from a new base is to create a new Legacy branch within this Repository and make that the on-going patch and support base for the current state of affairs.
We can then safely clean house on this branch and take Reactive into a new direction providing a proper segmented code base.

My Reactive History

I have been using Reactive coding since 2013 and have used it in various areas; ASP.NET, WPF and a Windows Service were the first implementations, used in conjunction with SignalR, working with David Fowler to ensure that SignalR delivered the features I needed within my company's projects at the time.
Ten years ago I started working on open-source projects, some of which have disappeared while others still remain. ReactiveProperty and ReactiveUI are two of the projects that still remain and have become commonly used within the DotNet ecosystem, both of which I have assisted in the development of.
System.Reactive is at the heart of every project that I work on, both in the open-source arena and my working environment. High-speed, data-intensive systems (testing equipment, machines and condition monitoring systems with 64ks/s data rates from a multitude of devices) are the core of my work, all of which are delivered using Reactive mechanisms, so performance is a key factor. With the aid of Reaqtive we are able to provide a system that can work in situations with intermittent internet connectivity, rejoining our Reactive subscriptions after a network loss.

The Future

Going forward I would like to see interfaces (the design surface) extracted from the functionality to allow for an extendable and replaceable functionality framework. The ability to create functions that are functionally different but still operate within the Reactive mechanism is important for flexibility. I myself have come across issues within the Reactive codebase where trying to add a reasonably small change to the built-in functionality led to having to fork the entire codebase, as the 'Onion Coding Style' used just led to another layer of dependency, until eventually I realised that there was no way of making the alteration as an extension of the core without entirely rebuilding it.

ReactiveUI

Within ReactiveUI we have core functionality exposed under a series of interfaces; however, the framework loading mechanism is built around enabling replaceable functionality. This allows easier support for various UI frameworks and extensibility/alteration of functionality to meet the requirements of the end user. We have a Core package, then a series of UI framework support packages to either alter or extend the core functionality. Avalonia is an example of this flexibility, where they have used ReactiveUI as a base and then made extensions fitting their UI framework.
It would be nice if System.Reactive also had a builder mechanism, whether automatic with a manual override or manual from the outset, so that we as the end users / developers can decide what functionality we want to include and load.

Desired Core Packages

Following a similar package pattern to Bonsai

Specific Microsoft UI Implementations If Relevant

This may be better handled by a UI framework such as ReactiveUI, but then no specific UI implementation should take place in System.Reactive.xxxx, only in System.Reactive.UI.xxxx. ReactiveUI would then build on System.Reactive.UI.Core and extend it as relevant.

NOTE these are my suggestions based upon my experiences of using Rx over the years.
We want Rx to be a big success so let's stir things up a bit and show people Rx is very much alive and kicking.
If we have to break a few things along the way, so be it; as they say, you can't break what's already broken, but you can still fix it.

Over the past few years there was little advancement in the DotNet Rx world, and it led people to think it's not the right way for them.
@idg10 has taken the bull by the horns and shown that Rx is still the right choice, and in my opinion the best choice.


2 replies

-

Perhaps the best way to start from a new base is to create a new Legacy branch within this Repository and make that the on-going patch and support base for the current state of affairs.
We can then safely clean house on this branch and take Reactive into a new direction providing a proper segmented code base.

I may have misunderstood, but are you proposing what amounts to a fork? To use the "new Rx" you would take a dependency on System.Reactive.Main?

This means it's possible for a single application to end up with a reference to both "old Rx" (what we have today) and "new Rx".

We were initially contemplating such a "clean start" approach, until @akarnokd pointed out the horrific flaw in this approach in his comment at #2038 (comment)

The critical part is this: if you end up with dependencies on both old Rx and new Rx, "whose Select() will be applied on IObservable<T>?" You could add one new package reference which, unbeknownst to you, had an indirect dependency on old Rx, and all of a sudden, simple code like this:

myObservable.Where(x => x > 2)

fails to compile because the C# compiler can see two extension methods for IObservable<T> called Where (both in the same namespace), one from old Rx, and one from new Rx.

This has led me to the view that whatever solution we choose, if you end up depending on both Rx 6 and Rx 7, the core Observable types have to unify to a single set of types. If you bifurcate by moving existing types into new assemblies without making the v7 System.Reactive forward those types to those new assemblies, you end up with this problem.
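For anyone who hasn't hit this before, here is a minimal, self-contained reproduction of the compiler behaviour being described. The two static classes merely stand in for the hypothetical "old Rx" and "new Rx" assemblies; the point is that it deliberately fails to compile:

using System;

namespace RxAmbiguityDemo
{
    // Stand-in for "old Rx": a Where extension method for IObservable<T>.
    public static class OldRxOperators
    {
        public static IObservable<T> Where<T>(this IObservable<T> source, Func<T, bool> predicate)
            => source; // body irrelevant to the demonstration
    }

    // Stand-in for "new Rx": an identical Where extension method.
    public static class NewRxOperators
    {
        public static IObservable<T> Where<T>(this IObservable<T> source, Func<T, bool> predicate)
            => source;
    }

    public static class Demo
    {
        public static void Run(IObservable<int> myObservable)
        {
            // error CS0121: the call is ambiguous between
            // OldRxOperators.Where<T>(...) and NewRxOperators.Where<T>(...)
            var filtered = myObservable.Where(x => x > 2);
        }
    }
}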

Or were you envisaging that System.Reactive would indeed become a legacy facade of type forwarders in your new design?


-

System.Reactive would and should live on as a facade, either as a codeless container with only shims for the legacy packages, or perhaps additionally as the host for a builder mechanism allowing the selection of function implementations for the various platforms. Quite correctly, we can't remove this package: anything that pulls in an older code base (Rx v6 or earlier) would otherwise fail.



-

Stop me if this has already been considered, elsewhere...

A more traditional approach to breaking changes would be to mark certain APIs as [Obsolete] when new replacement APIs are added, and let them exist that way for a major version or two, before removing them completely, thus giving diligent consumers plenty of time to change over seamlessly. Obviously, we're not talking about traditional APIs here, but this approach may still be possible.

Considering the running example, System.Reactive.Concurrency.DispatcherScheduler, the entire class could simply be marked as [Obsolete], while a new copy is added to a different package, under a different namespace, say, System.Reactive.Desktop. The obsolete class could then be fully removed in, say, .NET 10.
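As a concrete sketch of that first step (purely illustrative: System.Reactive.Desktop is the hypothetical package name used above, not an existing one), the attribute usage might look something like this:

using System;

namespace System.Reactive.Concurrency
{
    // Sketch only; the real declaration is more involved. The existing type
    // stays in place but is flagged, with a message pointing at its
    // replacement in a separate, opt-in package.
    [Obsolete("DispatcherScheduler has moved to the hypothetical System.Reactive.Desktop package. " +
              "Reference that package and use its DispatcherScheduler instead.")]
    public class DispatcherScheduler
    {
        // ...existing implementation left intact until the removal release...
    }
}

A later release could use the standard two-argument form of the attribute to turn use of the type into a compile error before it is removed entirely.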

This is of course still a breaking change, but perhaps a more acceptable one. It also requires tolerating the core issue of packaging bloat for a while longer instead of actually solving it, but the trade-off is that it alleviates the pain of the breaking change for most consumers. UILibA gets a good year or two to accommodate the breaking change, and consumers using UILibA, if they're diligent enough to recognize that UILibA isn't being maintained anymore, get the same time to find an alternative.

Does this make sense? Are there any other categories of changes in play here that wouldn't be compatible with this example?


1 reply

-

This does indeed make sense.

The only reason we hadn't initially suggested this was that it means System.Reactive would need to continue to have WPF and Windows Forms dependencies until such time as we can remove the obsolete members. And since nobody seemed to have found a workaround for the problems this was causing, it did not look like an approach that would solve those problems quickly enough. I also think .NET 10.0 might be a little early to be removing members; that's a fairly aggressive schedule for going from "obsolete" to "gone". So in practice we might have been stuck with the current problems for many more years.

However, now that we seem to have found a workaround, there's a lot less pressure to make changes quickly.

As a result, this approach now looks like it's probably the most promising. It enables us to retain the current package structure. It provides people with a warning of the problems. And it provides a clear path for us to get to the ideal destination state in which UI-specific features are off in separate components.

For me there are still some questions around exactly how we do it. One option is to have System.Reactive temporarily be a facade: the UI-specific stuff could be obsoleted and everything else could be moved into, say, System.Reactive.Main or whatever we chose to call it, and System.Reactive could type forward all the non-UI-specific stuff to that. The advantage of this is that it means there is a way for people to use Rx in a way that doesn't require the workaround without waiting for it all to be resolved properly.

However, I'm not sure if that's worth the complexity. It really depends on how well the workaround works out for people in practice.



-

All of these cures seem infinitely worse than the original problem of "wow, there's too many DLLs". Please don't make a mess of the Rx namespaces again. We have already been through this, and Rx is already confusing enough. We know from the first time around that having "too many DLLs" caused immense user confusion and (for some reason) accusations of "bloat" because "there were too many references". Now we are proposing making it even more confusing, in order to solve a problem that is largely annoying in nature.

Are we really worried about disk space here? In 2023? And that's worth nuking our entire ecosystem over? I really, really, really want people to reconsider the pain this will cause users of Rx, especially those who are using it via intermediate libraries. I know pulling in WPF sucks when you don't need it, but any kind of proposal like this will be so much worse.


5 replies


-

@anaisbetts I am also not a fan of DLL proliferation, in fact the opposite. However I don't think the problem being discussed here is just a concern with disk space. For example, I have outlined in #1847 a collection of issues that are currently caused by the current approach of having a single "mono-package" for System.Reactive.

The problem is that the mono-package approach can change the API surface of Rx depending on which platform you target, e.g. by assuming the consumer wants to use WinForms and UWP just because the project's target framework happens to be net6.0-windows10.0.17763.0. I think this is confusing in general, because UI frameworks will continue to come and go, and most Rx operators apart from ObserveOn and SubscribeOn are completely agnostic to this. Apparently several others are facing similar problems, judging by the list of open issues related to managing project dependency trees where Rx is involved.

Can you elaborate a little on why you think splitting out just these platform-specific schedulers would be so much worse than the current situation? I would genuinely like to hear what exactly the pain is on the other side, since so far we have mostly heard agreement here that the current state of the package is untenable.



-

Pulling in massive WPF dependencies when you don't need them isn't just "annoying" and doesn't just "suck"; it's a deal-breaker for most large-scale consumers. Avalonia determined it was less painful to remove System.Reactive entirely and re-implement all the operators they were using as internal helpers. That's a huge clue that, yes, consumers are indeed worried about disk space here in 2023.

At the end of the day, final-product complexity should carry far more weight in decision-making than design-time complexity. Having components split across multiple DLLs is not a significant barrier to entry in .NET land, not since .NET Core. The debate here is which approach offers the minimum change in design-time complexity.


-

A lot of IoT devices don't have much disk space or memory. They are designed as low-cost, single-purpose devices that are just responsible for sending a small amount of data at high data rates, and they need a tidy, efficient installation to run well.
So yes, the size is important in some cases, though admittedly not every case has this limited-resource issue.
Using Rx on these types of devices is an excellent option, but if the application is too large then more expensive hardware is required, pricing the option out of the market.

Web servers, desktop PCs, and mobile devices generally don't have such an issue in this modern world.

I don't believe we are talking about changing namespaces or function names, but rather taking the current 'Reactive onion' as I have described it (layer upon layer of interlocked functionality) that Rx has become and flattening it out a little into specific concerns, allowing for future flexibility and adaptability to cater for the needs of Rx's various use cases.


-

I'd also say that I believe a core "growth & purpose" use case for Rx is IoT, and the new version of the Introduction to Rx book has more of a focus on IoT / event processing than on the traditional WinForms / WPF / UI examples, as those are mainly catered for by the ReactiveUI stack. Rx running in WASM (https://youtu.be/KzQ_Whn6oBA?t=27298) shows what a possible future might look like.



-

I feel that there's also a long shadow from the bad old days of figuring out which of a zillion combinations of DLLs were needed for your scenario, which is not really what any version of this proposal is talking about.

The "decomposition" proposal is essentially a single base assembly that everyone could depend on (e.g. IoT/server-side apps as well as UI apps) with an opt-in choice of UI tiers to include if you happen to be building a UI application on one framework or another (with 3rd parties also providing support for their UI frameworks).

And the "backwards compatibility" proposal is trying to achieve a situation where there is a legacy package that aggregates everything together, so existing consumers keep the "full fat" experience they have today, up and down the dependency tree.

That legacy is inevitably going to cause anxiety, and that's why this thread is so important: there are lots of competing perspectives that have to be balanced out.

In that regard, I'd make special note of the legacy support for UWP consumers; we know there are a lot out there in the real world, even as UWP is being deprecated by MS. The approach to phasing out its support in System.Reactive is another important consideration.


-

Just as a matter of clarity, and since I didn't see it discussed here: is there any possibility of improving the dependency resolution that ends up bringing in the WPF/WinForms DLLs, in such a way that it doesn't happen to non-WPF/WinForms applications?
And could such a solution solve the problems raised here, backwards compatibility in particular?


4 replies

-

is there any possibility of improving the dependency resolution that ends up bringing in the WPF/WinForms DLLs, in such a way that it doesn't happen to non-WPF/WinForms applications?

I believe you have described pretty much the problem that is at the root of what is being discussed here. Unfortunately the answer to this question seems to be "no", at least not without introducing breaking changes to the package.



-

This prompted me to review in depth the full chain of causality that leads to these DLLs being present. (I've gone and found the relevant source code in NuGet and the .NET SDK.) And as a result of this, I've made a discovery.

It appears that in .NET SDK 8.0.100 at least (and possibly older SDKs—I've not yet tested this on older ones) a workaround is available that might enable us to move forward in a different way.

It turns out that if you have an application that is using Rx.NET 6.0.0, and is afflicted by the "complete copy of WPF and Windows Forms gets included" problem, you can work around it by adding this to your application's project file:

  <PropertyGroup>
    <DisableTransitiveFrameworkReferences>true</DisableTransitiveFrameworkReferences>
  </PropertyGroup>

This prevents the unwanted framework assemblies from being copied into your output.

You still end up building against the net6.0-windows10.0.19041 target of Rx.NET, but if you never actually use its WPF or Windows Forms functionality, you won't encounter a problem. (And if you do use that functionality, well, that means you really do need to include those frameworks in a self-contained deployment.)

The only fly in the ointment here is that I can't see a way to include just the Windows Forms but not the WPF components. I tried this:

  <PropertyGroup>
    <DisableTransitiveFrameworkReferences>true</DisableTransitiveFrameworkReferences>
    <UseWindowsForms>true</UseWindowsForms>
    <UseWPF>false</UseWPF>
  </PropertyGroup>

but for some reason this ends up giving you all the WPF components anyway. So if you're building self-contained Windows Forms apps, this is suboptimal. But it does at least appear to offer a fix for the problem when using Avalonia.

If this proves to be an acceptable workaround, it gives us a (multi-year) path for finally getting to the place where we want to be.

This would meet the preference that I (and @glopesdev ) have for keeping System.Reactive as the entry point for Rx.NET. It provides a path for getting all UI-framework-specific functionality out into UI-framework-specific packages, with an eventual end state of System.Reactive no longer needing to offer a -windows target.

The absolutely critical enabling element here is the availability of a viable workaround. For as long as the offending types remain in System.Reactive (even if they're marked as [Obsolete]) there will necessarily be a -windowsX.Y.Z target for Rx, which will necessarily have a framework reference to WPF and Windows Forms. And if there were no workaround for the problems that causes, the plan outlined above would not really be acceptable, because it would, in practice, be years before we could fix it.

My least favourite feature of this is that all apps with a -windows TFM using Rx.NET end up needing to apply the workaround, and that this will continue to be the case for however many years it takes us to feel that the problematic types have been [Obsolete] for long enough that it's OK to remove them. Given that Rx.NET has a System. prefix (meaning people often perceive it as being somehow an 'official' component, an impression reinforced by Rx.NET 5 claiming to be part of the ".NET 5.0 wave") we can't move quickly on that, so I could see us wanting to wait 5 years before removing the types. That's a long time for a workaround to be required as part of "normal" usage. Perhaps we'd want to have System.Reactive detect when you've ended up with the -windows target, and to issue a warning unless it can see that you've either set <DisableTransitiveFrameworkReferences> or you've got a reference to System.Reactive.Integration.Wpf or System.Reactive.Integration.WindowsForms.

For this reason, I think that even with this workaround we should consider a System.Reactive.Base component, enabling components or apps to indicate completely unambiguously that they have no intention of using the WPF/WinForms bits, and for System.Reactive to become a bunch of type forwarders to that (while remaining the primary point of entry for most Rx users).
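For readers unfamiliar with type forwarders: the mechanism being referred to is the standard TypeForwardedTo attribute, which lets one assembly redirect a type's identity to another assembly. A minimal sketch, assuming a hypothetical System.Reactive.Base assembly that actually contains the implementations, is shown below; this would be compiled into the facade System.Reactive assembly, one forwarder per public type that moves.

// Sketch only: System.Reactive.Base is a hypothetical assembly name.
// Code compiled against these types in System.Reactive keeps working,
// because the runtime follows the forward to the assembly that now defines them.
using System.Runtime.CompilerServices;

[assembly: TypeForwardedTo(typeof(System.Reactive.Linq.Observable))]
[assembly: TypeForwardedTo(typeof(System.Reactive.Concurrency.TaskPoolScheduler))]
// ...and so on for the rest of the non-UI-specific public surface area.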


-

The inclusion of WPF alongside Windows Forms may be due to the WPF hosting containers included in Windows Forms, but I don't believe you get all of WPF's dependencies, just the ones required for the container.


-

I want to add one other option we've thought about, and which might "work" but which we're reluctant to use because we believe it'd be completely unsupported.

It would be technically possible for us to build a NuGet package for System.Reactive which contains a net8.0 target and does NOT contain a net8.0-windows... target, but where the assembly in the net8.0 folder does in fact have references to WPF and Windows Forms assemblies. If you do this, then as long as you never attempt to use any of the WPF/WinForms features, you'd never know those references are there. The attraction of this is that if you do happen to be running in an application that has specified, say, <UseWPF>true</UseWPF>, then you will be able to use the WPF-specific features in this System.Reactive.

The benefit of this is that because from a NuGet packaging perspective, there's no apparent framework reference to the desktop frameworks, you don't get all the WPF/WinForms DLLs incorporated in your self-contained deployment just because you happened to add a reference to Rx.NET, but if you are in fact using those frameworks, it all works like you'd want. (And you get reasonably informative errors. If you try to use the relevant Rx.NET types in an app where you've not explicitly opted into using the relevant frameworks, you get a compiler error telling you that you're trying to use a, say, Windows Forms type without having a suitable assembly reference. And if you try to rely on the feature at runtime, you get a FileNotFoundException when it tries to load the relevant assembly.)

The massive downside is that this is a bit of a nasty hack and, as far as I know, completely unsupported.

Our experience with the problems of trying to use UWP in an unsupported way (i.e., using it in conjunction with the modern .NET project system) is that hacking the build to make it do things that aren't supported causes grief. For that reason, I'd be quite reluctant to go down this path.


1 reply

-

This is something that I had also thought about, along with the potential use of source generators to provide variations of the output. I haven't tested this, but I can see the potential of it working.


-

I get the reluctance to break backwards compatibility, but sometimes you need to nuke an egg to make a pizza. WPF and other UI frameworks sit "on top of" the .NET runtime and BCL; the same goes for ASP.NET and Entity Framework. I would like to see a split where there is a base with zero dependencies outside the runtime, and then anything that hooks into another framework gets its own NuGet package. So the "tie-in" packages would look something like this:

*.AspNetCore.Extensions
*.EfCore.EventedQueriable

1 reply

-

A disruptive change is worth it if it solves the current problem and meets future plans; it also helps to ship new features and bug fixes faster.
But there needs to be documentation to help people on old versions update to the new ones.


1 reply

-

From my point of view, as someone who builds libraries at my current company, I really would like the team to consider that a major update is a major update and that breaking changes are acceptable.

In my opinion it really isn't a great situation when we don't get things done properly just because "we want to make it backwards compatible". However, I understand the reasons behind it. I'm just stating my opinion that I would be fine with simply having to do steps "A, B, C" when I update.

Angular does it in such a way that they have a guide for updating to a major release with a checklist to work through. I think that approach is pretty good; I have never had an issue so far approaching things this way.


1 reply

-

Angular isn't a great model here, because it's a framework and as such tends to be a top-down sort of choice. With Rx you can end up in a situation where you use multiple libraries, all depending on slightly different versions of Rx, without even knowing they were using it.

You don't become an Angular application as a side effect of one of the libraries you're using taking a sneaky dependency on Angular! But that's exactly how you can end up as an Rx user. And really, it's the dependency scenarios that make this tricky. If we only had to care about applications making explicit decisions to use Rx, this would be a whole lot simpler.

When a new version of Angular comes out, an application developer will decide if/when to take that change, and will typically accept that this is going to involve some planning and a coordinated effort to update the app. That's because it's an application framework. Rx really isn't. You don't write an Rx app.


-

As I've worked to write up the proposed approach and build a prototype implementation of it, I have made a slightly surprising discovery: #205 was the original fix for the plug-in conflict problem, but there was a regression in Rx 5.0!

I've been building various test harnesses to verify that whatever packaging changes we ultimately make do not cause regressions for any of the problems that earlier packaging changes were trying to fix, and that is how I discovered that Rx 5.0 did in fact cause exactly that kind of regression. Here's the sequence that illustrates the problem (a small diagnostic sketch follows the list):

  1. Plug-in 1 is built for net471 using System.Reactive v5.0. It will have a copy of the netstandard2.0 version of System.Reactive.dll
  2. Plug-in 2 is built for net472 using System.Reactive v5.0. It will have a copy of the net472 version of System.Reactive.dll
  3. A .NET Framework 4.8.x host process loads plug-in 1 first, then later loads plug-in 2
  4. When plug-in 1 first tries to use an Rx type, the assembly resolver will load its copy of System.Reactive.dll (the netstandard2.0 build). It will be happy because that's the one it was expecting.
  5. When plug-in 2 first tries to use an Rx type, the assembly resolver will say "Hey, I already loaded System.Reactive.dll version 5.0.0.0, so I don't even need to look for it in your folder" so plug-in 2 will get the DLL from plug-in 1's folder, which is the netstandard2.0 copy
  6. If plug-in 2 tries to use any Rx.NET features that are in the net472 version but not the netstandard2.0 version (e.g., WPF support), it will crash with a TypeLoadException or a MissingMethodException
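As a small aside, a plug-in author trying to confirm step 5 can check at runtime exactly which copy of System.Reactive.dll the host resolved. This diagnostic sketch is not Rx-specific; it just enumerates what is already loaded:

using System;
using System.Linq;

static class RxLoadDiagnostics
{
    // Prints the version and on-disk location of any System.Reactive assembly
    // already loaded into the current AppDomain. In the scenario above,
    // plug-in 2 would see plug-in 1's netstandard2.0 copy reported here.
    public static void DumpLoadedRxAssemblies()
    {
        var loaded = AppDomain.CurrentDomain.GetAssemblies()
            .Where(a => a.GetName().Name == "System.Reactive");

        foreach (var asm in loaded)
        {
            Console.WriteLine($"{asm.GetName().Version} loaded from {asm.Location}");
        }
    }
}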

This came as something of a surprise, because the plug-in conflict issue was apparently a big deal at the time. I was baffled when I realised this regression occurred over 3 years ago and apparently nobody noticed.

I suspect this is because there are some things that are different this time. There's a straightforward workaround if a maintainer of an old plug-in hits this issue when upgrading to Rx 5.0: don't upgrade! They can stay on Rx 4.4.1, which does not have this problem. Most of the new work in Rx 5.0 was in support of newer frameworks, so legacy .NET Framework plug-ins almost certainly don't need to upgrade. Also, it only occurs when a plug-in uses a version of .NET that is older than 4.7.2. That shipped about 6 years ago, so in practice nobody's going to be building new plug-ins that have this issue. (So if you really need a later version of Rx you just need to make sure your plug-in targets net472 or later.) So in practice nobody runs into it. Compare that to the Rx 3.0 days, where you ran into this by default and there was no workaround.

The reason Rx 5.0 caused this regression is partly because in the Great Unification of Rx 4.0, the fix applied for Rx 3.1 (weird assembly version numbers) was abandoned. However, more by luck than judgement Rx 4.0 happened not to be susceptible to this bug. It only recurred when Rx 5.0 changed the TFMs. Specifically:

Here's the critical difference: with Rx 4.4.1, if you were targetting ANY version of .NET Framework, there was no way you could possibly load the netstandard2.0 version. The oldest version of .NET Framework that supported netstandard2.0 was .NET 4.6.1, and the build system considers the net46 target a better match than netstandard2.0 for NET >=4.6. So there's no version of .NET Framework you could target that would cause the build system to select the netstandard2.0 build of System.Reactive.dll.

But in Rx 5.0 that's not true. If you write a plug-in that targets net462, it can't use the net472 DLL; the only viable match is netstandard2.0, so it uses that. But any plug-in targeting net472 or later will pick the net472 DLL. This means that with Rx 5.0 it is possible for two different .NET Framework plug-ins to want different DLLs from Rx 5.0. That wasn't possible with Rx 4.x.

I think the only way to avoid this would be for System.Reactive not to offer a net472 version. Obviously that's completely impossible with the current design in which the Windows Forms and WPF features are built into System.Reactive, but this is yet another motivation for factoring those out into separate components. What's less clear is whether we could in fact remove the net472 target and have the netstandard2.0 target be the only one used on .NET FX. I believe it would be workable once we've finally removed the WPF and Windows Forms types, but it might still have a problem: it's possible that some schedulers would have suboptimal performance because a specialized net472 version might be able to do better than a lowest common denominator netstandard2.0 version.

I think the answer to that might be to continue to offer netstandard2.0 and net472 but to ensure that both have identical API surface areas. That way the worst case scenario is that in a plug-in situation you might get a slightly suboptimal scheduler if some other plug-in happened to cause the netstandard2.0 version to load. This wouldn't break anything, it could only possibly occur in plug-in scenarios, and is very unlikely to happen in practice because by the time we fully remove the UI types (2030?) nobody in their right minds should be writing a plug-in that uses the latest version of Rx and yet which targets a version of .NET FX older than 4.7.2. (And even if they did it wouldn't break. It just gives less than optimal perf in that very specific and unlikely scenario.)


0 replies

-

We now have two significant PRs. They are currently in draft because no final decisions have been made, but we want to put them out for public review now.

We expect everyone to hate the names of the new packages, so if you have any good ideas, please let us know. But bear in mind that some of the good names are already taken by the existing façade packages, and we have good reasons not to want to make those packages serve two unrelated purposes.

A set of packages built in the proposed way is available from the feed in the Azure DevOps project for Rx: https://dev.azure.com/dotnet/Rx.NET/_artifacts/feed/RxNet (which should be accessible on https://pkgs.dev.azure.com/dotnet/Rx.NET/_packaging/RxNet/nuget/v3/index.json for NuGet)

The packages all have version 7.0.0-preview.1.ge28289bcbc

We aren't planning to push even preview builds of these out to NuGet until we've got consensus on the approach.


9 replies

-

Hmm. I can't repro those errors - if I use Observable.ObserveOn in a WinUI project, it only appears to find the overloads taking an IScheduler or a SynchronizationContext. I'm not able to provoke those particular errors.

That said, I think this issue of there being two versions of WindowsBase might actually prove to be a showstopper. (The workaround really only affected the later stages of the build, preventing WPF and Windows Forms from being packaged into your app. This causes problems earlier on in the build, and I don't think there's anything we can do to resolve it.)

So I think we might end up having to turn System.Reactive into a type forwarding facade, and make a new NuGet package with just the non-UI-framework-specific Rx bits in after all. Sigh.

I apologise for not replying to this earlier. Somehow I never saw this thread until you referred me to it in that other discussion.


-

OK, I've finally found the time to look at this. I think there are a few things going on.

This particular message is just my attempt to collect my understanding of what your example shows. It does not offer any solutions; it just feeds into the set of things we need to consider in the design of any solution (but it might well rule out the workaround proposal: what your example reveals suggests that we might have no option but to do the full package split after all).

This code in MainWindow.xaml.cs seems to be what makes the issues in Rx here become a showstopper:

Observable.Interval(TimeSpan.FromSeconds(1))
    .ObserveOn(DispatcherQueueScheduler.Current)
    .Subscribe(_ => { });

In particular, it's the middle line, the use of the extension method, ObserveOn. The intent of that code is that you want to use the overload that takes an IScheduler. So the intent is that it's equivalent to this more verbose code:

IObservable<long> interval = Observable.Interval(TimeSpan.FromSeconds(1));
IScheduler scheduler = DispatcherQueueScheduler.Current;
IObservable<long> intervalOnDispatcher = Observable.ObserveOn(interval, scheduler);
intervalOnDispatcher.Subscribe(_ => { });

(Apologies to those who prefer var, but the precise static types are highly relevant to the problem, so it's easier to understand what's happening when you can see exactly what those types are.)

This actually compiles without error (although we still get warnings from NuGet). And the reason for that is that we've told the compiler exactly which static method we want. If we replace the 3rd line with this:

IObservable<long> intervalOnDispatcher = interval.ObserveOn(scheduler);

That's when we get these errors:

1>C:\dev\temp\rxrepros\kmgallahan\ReactiveWinUITests\ReactiveWinUIAppTest\MainWindow.xaml.cs(47,50,47,68): error CS0012: The type 'Control' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Windows.Forms, Version=6.0.2.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
1>C:\dev\temp\rxrepros\kmgallahan\ReactiveWinUITests\ReactiveWinUIAppTest\MainWindow.xaml.cs(47,50,47,68): error CS7069: Reference to type 'Dispatcher' claims it is defined in 'WindowsBase', but it could not be found
1>C:\dev\temp\rxrepros\kmgallahan\ReactiveWinUITests\ReactiveWinUIAppTest\MainWindow.xaml.cs(47,50,47,68): error CS7069: Reference to type 'DispatcherObject' claims it is defined in 'WindowsBase', but it could not be found

The proximate cause here is that by using an extension method, we've forced the C# compiler to consider every ObserveOn extension method defined by every static class in a namespace for which there is a using directive in effect. The working example called Observable.ObserveOn directly (meaning the compiler only had to look at the Observable class). But using the extension method syntax means it now also has to look at ControlObservable and DispatcherObservable, because those types are in System.Reactive.Linq, and this source file has using System.Reactive.Linq; at the top.

Since we forced it to look at all the ObserveOn extension methods defined by those types, the C# compiler now has a problem: it can't actually understand those methods, because they refer to types that appear to be unavailable. The exact nature of this problem is slightly different for the first error vs the other two errors.

The problem described by the first error is simply that ControlObservable.ObserveOn refers to System.Windows.Forms.Control, a type that is unavailable in this project because this is not a Windows Forms project.

The problem described by the next two errors is a bit more subtle. The DispatcherObservable class includes ObserveOn extension methods referring to Dispatcher and DispatcherObject. These are somewhat different than the Windows Forms issue, in that these are defined in WindowsBase.dll, an assembly that we do in fact have a reference to in this project. However, it turns out that WindowsBase.dll is weird: the .NET SDK supplies reference assemblies for v4.0.0.0 of WindowsBase.dll, but this has a drastically reduced surface area.

On my system with SDK 9.0.300 installed, I find that for your example project (which targets .NET 8.0) this reference assembly is at C:\Program Files\dotnet\packs\Microsoft.NETCore.App.Ref\8.0.16\ref\net8.0\WindowsBase.dll, and if you open that up in ILDASM, you'll see it defines no types of its own. It contains only type forwarders, and it seems to cover just four areas of functionality.

This is a fairly small subset of all the things that older versions of WindowsBase.dll cover. WindowsBase.dll started out as a WPF thing, but because WPF was initially conceived of as a core OS component, this ended up being home to a bunch of slightly more general-purpose things, and I guess the idea with the stripped-down WindowsBase.dll v4.0.0.0 is that it holds just the general-purpose bits that are supported in modern .NET.

However, later versions of WindowsBase are made available in Microsoft.WindowsDesktop.App (e.g., with SDK 9.0.300 installed, I see one at "C:\Program Files\dotnet\packs\Microsoft.WindowsDesktop.App.Ref\8.0.16\ref\net8.0\WindowsBase.dll"), and these include all the WPFisms such as Dispatcher, which is the type that Rx's DispatcherObservable expects to exist in WindowsBase.dll.

So we've got this weird situation where v4.0.0.0 of WindowsBase reduces the API surface area, but subsequent versions put it all back!

A problem with depending on a later, fully-featured version of WindowsBase is presumably that it puts you back into a situation of dependence on WPF. Rx.NET tries to have it both ways: it builds against later versions of WindowsBase (5.0 or 6.0 depending on which Rx version you use) but doesn't expose this dependency to the outside world: it will tolerate WindowsBase 4.0, but this leads to failures if you try to use anything in Rx.NET that depends on the 5.0+ feature set of WindowsBase.

But this isn't really a tenable situation, because application code can find itself causing a compile-time dependency on WPFisms in an app that doesn't target WPF. The nature of extension methods means that source code that never meant to depend on WPF (and which would produce no runtime dependency on WPF, because it actually compiles down to one of the overloads that doesn't use WPF) can cause a compile-time dependency on WPF that makes the build fail, because the compiler is unable to understand the Rx.NET code in question.

There's one particularly important conclusion:

If the -windows TFM for Rx.NET includes Windows Forms support, any project that tries to use that TFM will encounter this error when trying to use ObserveOn. Likewise, if this TFM of Rx.NET includes any WPF support, any project that tries to use this TFM will also encounter errors when trying to use ObserveOn.

Not for the first time with Rx.NET's packaging problems, extension methods rule out certain strategies. (They are the reason the oft-suggested "clean break" approach doesn't work.)

At one point, Rx.NET was actually attempting to work around this problem, but as far as I can tell, it didn't actually work. Up to and including Rx 5.0.0, we had a .targets file that was meant to give you the netstandard2.0 build on .NET Core/.NET 5 unless either UseWPF or UseWindowsForms was set. That was introduced back with .NET Core 3.0, at a time before OS-specific TFMs existed. The logic was still there in Rx 5.0.0 even though OS-specific TFMs did exist by then, meaning that, in theory, even when your app has a -windows TFM you would not get the -windows version of Rx unless you had set either UseWPF or UseWindowsForms.

In practice, that didn't actually appear to work, because AvaloniaUI/Avalonia#9549 shows that they pick up the -windows build in a console app that does nothing more than add a reference to Avalonia.Direct2D1. It also doesn't help with your example: if I downgrade the Rx reference to 5.0.0 (which still contains the logic that is supposed to detect when you didn't ask for WPF or Windows Forms, and to give you the netstandard2.0 version), the compiler errors still occur, indicating that the build system is in fact supplying the compiler with the net5.0-windows... version of Rx, despite the logic in the build/System.Reactive.targets file that's supposed to avoid that.

(You suggest that maybe adding similar UseWinUI detection might help, but I'm fairly sure it wouldn't: the intent of that targets file was to use the netstandard2.0 build unless you had either UseWPF or UseWindowsForms set. If that had worked as intended in Rx 5.0.0, then your example would work fine with Rx 5.0.0. But it doesn't, because that trick in the targets file appears not to have worked as intended.)

Since a) this didn't work as it was meant to, and b) by May 2024 there were no versions of .NET still in support that did not understand OS-specific TFMs, I removed that build/System.Reactive.targets file. I'm fairly sure that changed nothing because, as just described, it didn't appear to do what it was supposed to: a non-WPF, non-Windows-Forms app targeting net8.0-windows10.0.19041 using Rx 5.0 would end up with the net5.0-windows10.0.19041 Rx target and not, as that .targets file apparently intended, the netstandard2.0 target.


-

What your example reveals suggests that we might have no option but to do the full package split after all.

@idg10 is this option of doing a full package split already part of the existing PR, or documented briefly elsewhere?

There has been so much written about this topic over the last couple of years that I, for one, have started to lose track of the current state of things. More than the ADR, I think having the shortest possible distillation of what the current proposal is shaping up to be would be invaluable.

I think most agree with everything you say, and also feel there is no option but to go back to having a full split between the Rx schedulers and the rest of the framework. I believe the only bit left is naming, which remains important to avoid another round of regrets later on.


-

@idg10 Thanks for taking the time to look into these errors.

I do not have much more to add other than my full support for a break to dump as much legacy cruft as possible.

When compared to R3, Rx.NET appears to be stuck in a quagmire. My main issue there is the lack of DynamicData support (issue).


-

Anyone following this discussion might want to know that we are now planning to move forward. Please see #2177 for details.


1 reply

-

@idg10 In the spirit of summarising these very long-winded discussions, it would be useful to include a very high-level summary of the executive decisions taken: specifically, the exact names of the packages should be made clearer for everyone to comment on, since we will all be living with the aftermath of this for the foreseeable future.

For the record, I still don't like the names, and in general I find inertia to be a poor judge of quality. I will take some time this week to think of alternative ways forward.


-

See: #2211 for the latest progress.


0 replies

