Discussion:
more releases
Richard Eisenberg
2015-09-01 06:45:40 UTC
Permalink
Hi devs,

An interesting topic came up over dinner tonight: what if GHC made more releases? As an extreme example, we could release a new point version every time a bug fix gets merged to the stable branch. This may be a terrible idea. But what's stopping us from doing so?

The biggest objection I can see is that we would want to make sure that users' code would work with the new version. Could the Stackage crew help us with this? If they run their nightly build with a release candidate and diff against the prior results, we would get a pretty accurate sense of whether the bugfix is good. If this test succeeds, why not release? Would it be hard to automate the packaging/posting process?
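Concretely, the "diff against the prior results" step could be as simple as comparing per-package pass/fail lists from the two runs. A minimal sketch, assuming an invented log format of one `package-version STATUS` line per build result (Stackage's real output differs):

```shell
#!/bin/sh
# Report packages that built OK against the previous release but FAIL
# against the release candidate. The "package-version STATUS" log format
# is a made-up stand-in for whatever Stackage actually emits.
report_regressions() {
  old=$1; new=$2
  awk '$2 == "FAIL" { print $1 }' "$new" | sort > /tmp/new-fail.$$
  awk '$2 == "OK"   { print $1 }' "$old" | sort > /tmp/old-ok.$$
  # Regressions = packages appearing in both lists.
  comm -12 /tmp/old-ok.$$ /tmp/new-fail.$$
  rm -f /tmp/new-fail.$$ /tmp/old-ok.$$
}
```

An empty report would be the "why not release?" signal; a non-empty one names exactly which packages the bugfix broke.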

The advantage to more releases is that it gets bugfixes in more hands sooner. What are the disadvantages?

Richard

PS: I'm not 100% sold on this idea. But I thought it was interesting enough to raise a broader discussion.
Michael Snoyman
2015-09-01 06:53:00 UTC
Permalink
It's definitely an interesting idea. From the Stackage side: I'm happy to
provide testing and, even better, support to get some automated Stackage
testing tied into the GHC release process. (Why not be more aggressive? We
could do some CI against Stackage from the 7.10 branch on a regular basis.)

I like the idea of getting bug fixes out to users more frequently, so I'm
definitely +1 on the discussion. Let me play devil's advocate though:
having a large number of versions of GHC out there can make it difficult
for library authors, package curators, and large open source projects, due
to the variety of versions in use. If we end up in a world where
virtually everyone ends up on the latest point release in a short
timeframe, the problem is reduced, but most of our current installation
methods are not amenable to that. We need to have a serious discussion
about how Linux distros, Haskell Platform, minimal installers, and so on
would address this shift. (stack would be able to adapt to this easily
since it can download new GHCs as needed, but users may not like having
100MB installs on a daily basis ;).)

What I would love to see is bug fixes being regularly backported to the
stable GHC branch and released within a reasonable timeframe, where
"reasonable" is some value we can discuss and come to consensus on.
I'll say that at the extremes: I think a week is far too short, and a year
is far too long.
Post by Richard Eisenberg
Hi devs,
An interesting topic came up over dinner tonight: what if GHC made more
releases? As an extreme example, we could release a new point version every
time a bug fix gets merged to the stable branch. This may be a terrible
idea. But what's stopping us from doing so?
The biggest objection I can see is that we would want to make sure that
users' code would work with the new version. Could the Stackage crew help
us with this? If they run their nightly build with a release candidate and
diff against the prior results, we would get a pretty accurate sense of
whether the bugfix is good. If this test succeeds, why not release? Would
it be hard to automate the packaging/posting process?
The advantage to more releases is that it gets bugfixes in more hands
sooner. What are the disadvantages?
Richard
PS: I'm not 100% sold on this idea. But I thought it was interesting
enough to raise a broader discussion.
_______________________________________________
ghc-devs mailing list
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Stephen Paul Weber
2015-09-05 20:06:53 UTC
Permalink
Post by Michael Snoyman
having a large number of versions of GHC out there can make it difficult
for library authors, package curators, and large open source projects, due
to the variety of versions in use.
For point releases, if we do it right, this *should* not happen, since the
changes *should* be backwards-compatible and so testing against the oldest
release on the current major version *should* mean all subsequent point
releases work as well. IMHO, any violation of this assumption *should* be
considered a (serious) bug.
Herbert Valerio Riedel
2015-09-01 07:01:55 UTC
Permalink
Post by Richard Eisenberg
An interesting topic came up over dinner tonight: what if GHC made
more releases? As an extreme example, we could release a new point
version every time a bug fix gets merged to the stable branch. This
may be a terrible idea. But what's stopping us from doing so?
The biggest objection I can see is that we would want to make sure
that users' code would work with the new version. Could the Stackage
crew help us with this? If they run their nightly build with a release
candidate and diff against the prior results, we would get a pretty
accurate sense of whether the bugfix is good. If this test succeeds,
why not release? Would it be hard to automate the packaging/posting
process?
The advantage to more releases is that it gets bugfixes in more hands
sooner. What are the disadvantages?
I'd say mostly organisational overhead which can't be fully automated
(afaik, Ben has already automated large parts but not everything can be):

- Coordinating with people creating and testing the bindists
- Writing release notes & announcement
- Coordinating with the HP release process (which requires separate QA)
- If bundled core-libraries are affected, coordination overhead with package
maintainers (unless GHC HQ owned), verifying version bumps (API diff!) and
changelogs have been updated accordingly, uploading to Hackage
- Uploading and signing packages to download.haskell.org, and verifying
the downloads

Austin & Ben probably have more to add to this list

That said, doing more stable point releases is certainly doable if the
bugs fixed are critical enough. This is mostly a trade-off between time
spent on getting GHC HEAD in shape for the next major release (whose
release-schedules suffer from time delays anyway) vs. maintaining a
stable branch.

Cheers,
hvr
Richard Eisenberg
2015-09-01 07:12:21 UTC
Permalink
Post by Herbert Valerio Riedel
I'd say mostly organisational overhead which can't be fully automated
- Coordinating with people creating and testing the bindists
This was the sort of thing I thought could be automated. I'm picturing a system where Austin/Ben hits a button and everything whirs to life, creating, testing, and posting bindists, with no people involved.
Post by Herbert Valerio Riedel
- Writing release notes & announcement
Release notes should, theoretically, be updated with the patches. Announcement can be automated.
Post by Herbert Valerio Riedel
- Coordinating with the HP release process (which requires separate QA)
I'm sure others will have opinions here, but I guess I was thinking that the HP wouldn't be involved. These tiny releases could even be called something like "7.10.2 build 18". The HP would get updated only when we go to 7.10.3. Maybe we even have a binary compatibility requirement between tiny releases -- no interface file changes! Then a user's package library doesn't have to be recompiled when updating. In theory, other than the bugfixes, two people with different "builds" of GHC should have the same experience.
Post by Herbert Valerio Riedel
- If bundled core-libraries are affected, coordination overhead with package
maintainers (unless GHC HQ owned), verifying version bumps (API diff!) and
changelogs have been updated accordingly, uploading to Hackage
Any library version change would require a more proper release. Do these libraries tend to change during a major release cycle?
Post by Herbert Valerio Riedel
- Uploading and signing packages to download.haskell.org, and verifying
the downloads
This isn't automated?
Post by Herbert Valerio Riedel
Austin & Ben probably have more to add to this list
I'm sure they do.

Again, I'd be fine if the answer from the community is "it's just not what we need". But I wanted to see if there were technical/practical/social reasons why this was or wasn't a good idea. If we do think it's a good idea absent those reasons, then we can work on addressing those concerns.

Richard
Post by Herbert Valerio Riedel
That said, doing more stable point releases is certainly doable if the
bugs fixed are critical enough. This is mostly a trade-off between time
spent on getting GHC HEAD in shape for the next major release (whose
release-schedules suffer from time delays anyway) vs. maintaining a
stable branch.
Cheers,
hvr
Ben Gamari
2015-09-02 10:43:57 UTC
Permalink
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
I'd say mostly organisational overhead which can't be fully automated
- Coordinating with people creating and testing the bindists
This was the sort of thing I thought could be automated. I'm picturing
a system where Austin/Ben hits a button and everything whirs to life,
creating, testing, and posting bindists, with no people involved.
I can nearly do this for Linux with my existing tools. I can do 32- and
64-bit builds for both RedHat and Debian all on a single
Debian 8 machine with the tools I developed during the course of the
7.10.2 release [1].

Windows is unfortunately still a challenge. I did the 7.10.2 builds on
an EC2 instance and the experience wasn't terribly fun. I would love for
this to be further automated but I've not done this yet.
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
- Writing release notes & announcement
Release notes should, theoretically, be updated with the patches.
Announcement can be automated.
If I'm doing my job well the release notes shouldn't be a problem. I've
been trying to be meticulous about ensuring that all new features come
with acceptable release notes.
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
- If bundled core-libraries are affected, coordination overhead with package
maintainers (unless GHC HQ owned), verifying version bumps (API diff!) and
changelogs have been updated accordingly, uploading to Hackage
Any library version change would require a more proper release. Do
these libraries tend to change during a major release cycle?
The core libraries are perhaps the trickiest part of this. Currently the
process goes something like this,

1. We branch off a stable GHC release
2. Development continues on `master`, eventually a breaking change is
merged to one of the libraries
3. Eventually someone notices and bumps the library's version
4. More breaking changes are merged to the library
5. We branch off for another stable release, right before the release
we manually push the libraries to Hackage
6. Repeat from (2)

There can potentially be a lot of interface churn between steps 3 and 5.
If we did releases in this period we would need to be much more careful
about library versioning. I suspect this may end up being quite a bit of
work to do properly.
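Some of that bookkeeping could at least be mechanised: a pre-release check
could compare each core library's .cabal version field between the last
release tag and the stable branch tip, and flag anything that changed. A
rough sketch (the paths and workflow are assumptions, not our actual
release checklist):

```shell
#!/bin/sh
# Flag a core library whose version field changed between two checkouts,
# meaning it needs a Hackage release (and an API diff) before GHC ships.
# Invoke as: needs_release old-tree/base.cabal new-tree/base.cabal

cabal_version() {
  # Extract the "version:" field from a .cabal file.
  awk 'tolower($1) == "version:" { print $2; exit }' "$1"
}

needs_release() {
  old=$(cabal_version "$1")
  new=$(cabal_version "$2")
  if [ "$old" != "$new" ]; then
    echo "version bump: $old -> $new (release to Hackage before GHC ships)"
  fi
}
```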

Technically we could punt on this problem and just do the same sort of
stable/unstable versioning for the libraries that we already do with GHC
itself. This would mean, however, that we couldn't upload the libraries
to Hackage.
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
- Uploading and signing packages to download.haskell.org, and verifying
the downloads
This isn't automated?
It is now (see [2]). This shouldn't be a problem.
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
Austin & Ben probably have more to add to this list
I'm sure they do.
Again, I'd be fine if the answer from the community is "it's just not
what we need". But I wanted to see if there were
technical/practical/social reasons why this was or wasn't a good idea.
If we do think it's a good idea absent those reasons, then we can work
on addressing those concerns.
Technically I think there are no reasons why this isn't feasible with
some investment. Exactly how much investment depends upon what
exactly we want to achieve,

* How often do we make these releases?
* Which platforms do we support?
* How carefully do we version included libraries?

If we focus solely on Linux and punt on the library versioning issue, I
would say this wouldn't even be difficult. I could easily set up my build
machine to do a nightly bindist and push it to a server somewhere.
Austin has also mentioned that Harbormaster builds could potentially
produce bindists.
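For the Linux-only, punt-on-versioning variant, the nightly job could be
little more than a cron script around the existing tooling. A sketch, with
a hypothetical upload host and a purely illustrative date-stamped naming
scheme:

```shell
#!/bin/sh
# Hypothetical nightly bindist job for the stable branch. The build and
# upload steps are left as comments since they depend on the build machine;
# only the naming convention is spelled out.
set -e

BRANCH=ghc-7.10
SNAPSHOT=$(date +%Y%m%d)

# Name bindists base-version.snapshot-date, so that tiny releases sort
# after the point release they extend.
bindist_name() {
  base=$1; snap=$2; arch=$3
  echo "ghc-${base}.${snap}-${arch}-unknown-linux.tar.xz"
}

# git -C ghc checkout "$BRANCH" && git -C ghc pull
# (cd ghc && ./boot && ./configure && make -j8 && make binary-dist)
# rsync ghc/ghc-*.tar.xz builds@example.org:public_html/nightly/

bindist_name 7.10.2 "$SNAPSHOT" x86_64
```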

The question is whether users want more rapid releases. Those working on
GHC will use their own builds. Most users want something reasonably
stable (in both the interface sense and the reliability sense) and
therefore I suspect would stick with the releases. This leaves a
relatively small number of potential users; namely those who want to
play around with unreleased features yet aren't willing to do their own
builds.

Cheers,

- Ben


[1] https://github.com/bgamari/ghc-utils
[2] https://github.com/bgamari/ghc-utils/blob/master/rel-eng/upload.sh
Herbert Valerio Riedel
2015-09-02 10:49:32 UTC
Permalink
On 2015-09-02 at 12:43:57 +0200, Ben Gamari wrote:

[...]
Post by Ben Gamari
The question is whether users want more rapid releases. Those working on
GHC will use their own builds. Most users want something reasonably
stable (in both the interface sense and the reliability sense) and
therefore I suspect would stick with the releases. This leaves a
relatively small number of potential users; namely those who want to
play around with unreleased features yet aren't willing to do their own
builds.
Btw, for those who are willing to use Ubuntu, there are already GHC HEAD
builds available in my PPA, and I can easily keep creating GHC 7.10.3
snapshots in the same style as I usually do shortly before a stable
point-release.
Richard Eisenberg
2015-09-02 15:44:15 UTC
Permalink
I think some of my idea was misunderstood here: my goal was to have quick releases only from the stable branch. The goal would not be to release the new and shiny, but instead to get bugfixes out to users quicker. The new and shiny (master) would remain as it is now. In other words: more users would be affected by this change than just the vanguard.

Richard
Post by Ben Gamari
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
I'd say mostly organisational overhead which can't be fully automated
- Coordinating with people creating and testing the bindists
This was the sort of thing I thought could be automated. I'm picturing
a system where Austin/Ben hits a button and everything whirs to life,
creating, testing, and posting bindists, with no people involved.
I can nearly do this for Linux with my existing tools. I can do 32- and
64-bit builds for both RedHat and Debian all on a single
Debian 8 machine with the tools I developed during the course of the
7.10.2 release [1].
Windows is unfortunately still a challenge. I did the 7.10.2 builds on
an EC2 instance and the experience wasn't terribly fun. I would love for
this to be further automated but I've not done this yet.
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
- Writing release notes & announcement
Release notes should, theoretically, be updated with the patches.
Announcement can be automated.
If I'm doing my job well the release notes shouldn't be a problem. I've
been trying to be meticulous about ensuring that all new features come
with acceptable release notes.
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
- If bundled core-libraries are affected, coordination overhead with package
maintainers (unless GHC HQ owned), verifying version bumps (API diff!) and
changelogs have been updated accordingly, uploading to Hackage
Any library version change would require a more proper release. Do
these libraries tend to change during a major release cycle?
The core libraries are perhaps the trickiest part of this. Currently the
process goes something like this,
1. We branch off a stable GHC release
2. Development continues on `master`, eventually a breaking change is
merged to one of the libraries
3. Eventually someone notices and bumps the library's version
4. More breaking changes are merged to the library
5. We branch off for another stable release, right before the release
we manually push the libraries to Hackage
6. Repeat from (2)
There can potentially be a lot of interface churn between steps 3 and 5.
If we did releases in this period we would need to be much more careful
about library versioning. I suspect this may end up being quite a bit of
work to do properly.
Technically we could punt on this problem and just do the same sort of
stable/unstable versioning for the libraries that we already do with GHC
itself. This would mean, however, that we couldn't upload the libraries
to Hackage.
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
- Uploading and signing packages to download.haskell.org, and verifying
the downloads
This isn't automated?
It is now (see [2]). This shouldn't be a problem.
Post by Richard Eisenberg
Post by Herbert Valerio Riedel
Austin & Ben probably have more to add to this list
I'm sure they do.
Again, I'd be fine if the answer from the community is "it's just not
what we need". But I wanted to see if there were
technical/practical/social reasons why this was or wasn't a good idea.
If we do think it's a good idea absent those reasons, then we can work
on addressing those concerns.
Technically I think there are no reasons why this isn't feasible with
some investment. Exactly how much investment depends upon what
exactly we want to achieve,
* How often do we make these releases?
* Which platforms do we support?
* How carefully do we version included libraries?
If we focus solely on Linux and punt on the library versioning issue, I
would say this wouldn't even be difficult. I could easily set up my build
machine to do a nightly bindist and push it to a server somewhere.
Austin has also mentioned that Harbormaster builds could potentially
produce bindists.
The question is whether users want more rapid releases. Those working on
GHC will use their own builds. Most users want something reasonably
stable (in both the interface sense and the reliability sense) and
therefore I suspect would stick with the releases. This leaves a
relatively small number of potential users; namely those who want to
play around with unreleased features yet aren't willing to do their own
builds.
Cheers,
- Ben
[1] https://github.com/bgamari/ghc-utils
[2] https://github.com/bgamari/ghc-utils/blob/master/rel-eng/upload.sh
Ben Gamari
2015-09-02 16:04:33 UTC
Permalink
Post by Richard Eisenberg
I think some of my idea was misunderstood here: my goal was to have
quick releases only from the stable branch. The goal would not be to
release the new and shiny, but instead to get bugfixes out to users
quicker. The new and shiny (master) would remain as it is now. In
other words: more users would be affected by this change than just the
vanguard.
I see. This is something we could certainly do.

It would require, however, that we be more pro-active about
continuing to merge things to the stable branch after the release.
Currently the stable branch is essentially in the same state that it was
in for the 7.10.2 release. I've left it this way as it takes time and
care to cherry-pick patches to stable. Thus far my policy has been to
perform this work lazily until it's clear that we will do
another stable release as otherwise the effort may well be wasted.

So, even if the steps of building, testing, and uploading the release
are streamlined, more frequent releases are still far from free. Whether
it's a worthwhile cost I don't know.

This is a difficult question to answer without knowing more about how
typical users actually acquire GHC. For instance, this effort would
have minimal impact on users who get their compiler through their
distribution's package manager. On the other hand, if most users
download GHC bindists directly from the GHC download page, then perhaps
this would be effort well-spent.

Cheers,

- Ben
Alex Rozenshteyn
2015-09-03 13:59:55 UTC
Permalink
I have the impression (no data to back it up, though) that no small number
of users download bindists (because most OS packages are out of date:
Debian Unstable is still on 7.8.4, as is Ubuntu Wily; Arch is on 7.10.1).
Post by Ben Gamari
Post by Richard Eisenberg
I think some of my idea was misunderstood here: my goal was to have
quick releases only from the stable branch. The goal would not be to
release the new and shiny, but instead to get bugfixes out to users
quicker. The new and shiny (master) would remain as it is now. In
other words: more users would be affected by this change than just the
vanguard.
I see. This is something we could certainly do.
It would require, however, that we be more pro-active about
continuing to merge things to the stable branch after the release.
Currently the stable branch is essentially in the same state that it was
in for the 7.10.2 release. I've left it this way as it takes time and
care to cherry-pick patches to stable. Thus far my policy has been to
perform this work lazily until it's clear that we will do
another stable release as otherwise the effort may well be wasted.
So, even if the steps of building, testing, and uploading the release
are streamlined, more frequent releases are still far from free. Whether
it's a worthwhile cost I don't know.
This is a difficult question to answer without knowing more about how
typical users actually acquire GHC. For instance, this effort would
have minimal impact on users who get their compiler through their
distribution's package manager. On the other hand, if most users
download GHC bindists directly from the GHC download page, then perhaps
this would be effort well-spent.
Cheers,
- Ben
Simon Peyton Jones
2015-09-07 14:57:10 UTC
Permalink
Merging and releasing a fix to the stable branch always carries a cost: it might break something else. There is a real cost to merging, which is why we've followed the lazy strategy that Ben describes.

Still, even given the lazy strategy we could perfectly well put out minor releases more proactively; e.g. fix one bug (or a little batch) and release. Provided we could reduce the per-release costs.

Simon

| -----Original Message-----
| From: ghc-devs [mailto:ghc-devs-***@haskell.org] On Behalf Of Ben
| Gamari
| Sent: 02 September 2015 17:05
| To: Richard Eisenberg
| Cc: GHC developers
| Subject: Re: more releases
|
| Richard Eisenberg <***@cis.upenn.edu> writes:
|
| > I think some of my idea was misunderstood here: my goal was to have
| > quick releases only from the stable branch. The goal would not be to
| > release the new and shiny, but instead to get bugfixes out to users
| > quicker. The new and shiny (master) would remain as it is now. In
| > other words: more users would be affected by this change than just
| the
| > vanguard.
| >
| I see. This is something we could certainly do.
|
| It would require, however, that we be more pro-active about continuing
| to merge things to the stable branch after the release.
| Currently the stable branch is essentially in the same state that it
| was in for the 7.10.2 release. I've left it this way as it takes time
| and care to cherry-pick patches to stable. Thus far my policy has been
| to perform this work lazily until it's clear that we will do another
| stable release as otherwise the effort may well be wasted.
|
| So, even if the steps of building, testing, and uploading the release
| are streamlined, more frequent releases are still far from free.
| Whether it's a worthwhile cost I don't know.
|
| This is a difficult question to answer without knowing more about how
| typical users actually acquire GHC. For instance, this effort would
| have minimal impact on users who get their compiler through their
| distribution's package manager. On the other hand, if most users
| download GHC bindists directly from the GHC download page, then
| perhaps this would be effort well-spent.
|
| Cheers,
|
| - Ben
Bardur Arantsson
2015-09-07 16:05:56 UTC
Permalink
Post by Simon Peyton Jones
it might break something else. There is a real cost to merging, which
is why we've followed the lazy strategy that Ben describes.
A valid point, but the upside is that it's a very fast operation to
revert if a release is "bad"... and get that updated release into the wild.

Regards,
Eric Crockett
2015-09-10 15:07:59 UTC
Permalink
Some people had asked what users want and about typical usage, so I'll
give my perspective. I consider myself a pretty typical user of
Haskell: PhD student (in theory, not languages), but still pushing the
boundaries of the compiler. I've filed quite a few bugs, so I have
experience with having to wait for them to get fixed. My code at various
points has been littered with "see ticket #xxx for why I'm jumping through
three hoops to accomplish this". As a result, I would be interested in
getting builds with bugfixes. For example see the discussion on #10428:
https://ghc.haskell.org/trac/ghc/ticket/10428. It's hard for a user to tell
if/when a patch will be merged. I'm using 7.10.1 at the moment, but I was
unsure if the patch for #10428 made it to 7.10.2.

Ben: I download the GHC bindist directly from the GHC page precisely
because the one on the PPA is (inevitably) ancient.

Upgrading GHC (even minor releases; I just tried 7.10.2 to confirm this) is
a pain because I have to spend an hour downloading and re-building all of
the packages I need. However, I'd certainly be willing to do that for bugs
that affect my code. Richard said, "Then a user's package library doesn't
have to be recompiled when updating". If he means that I wouldn't have to
do that, that's fantastic. However, I still wouldn't download every tiny
release, given the ~100MB download and install time, just to pick up fixes
for bugs that don't affect me (I'd only do that for bugs that *do* affect me).

In short: I'd really like to have builds for every bug (or maybe every
day/week) that I can easily download and install.
Post by Bardur Arantsson
Post by Simon Peyton Jones
it might break something else. There is a real cost to merging, which
is why we've followed the lazy strategy that Ben describes.
A valid point, but the upside is that it's a very fast operation to
revert if a release is "bad"... and get that updated release into the wild.
Regards,