Hacker News

> I assume that someone working on the project would do: pip install -e . in a virtual environment. I thought this was quite well-established. Is there a problem with it that I'm not aware of?

So ignoring your requirements.txt, and potentially working with different versions of dependencies than the ones everyone else was working with, and encountering different bugs?

(Also managing your virtual environments "by hand" is tedious and error-prone when you're working on multiple projects).
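To make the distinction concrete, here is a minimal sketch (the package name, version, and dependency ranges are all hypothetical) of the loose dependencies a setup.py declares versus the pins a requirements.txt carries:

```python
# setup.py -- loose, abstract dependencies: the ranges the project
# is believed to be compatible with (hypothetical examples).
from setuptools import setup

setup(
    name="myproject",
    version="0.1",
    install_requires=[
        "requests>=2.0",   # any 2.x is assumed to work
        "click>=7.0",
    ],
)

# requirements.txt -- concrete, pinned dependencies: the exact
# environment that was actually tested, typically produced by
# `pip freeze`:
#
#   certifi==2020.12.5
#   click==7.1.2
#   requests==2.25.1
#   urllib3==1.26.3
```

`pip install -e .` resolves only the loose ranges above, so it can pull in whatever versions happen to be current; `pip install -r requirements.txt` reproduces the pinned set. That gap is what the parent comment is pointing at.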

> pip freeze > requirements.txt for requirements.txt generation.

The problem with this is that it's not reproducible - if two people try to run it they might get different results, and it's not at all obvious who should "win" when the time comes to merge. If you mess up the merge and re-run, then maybe you get a different result again, and have to do all your testing etc. over again.
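A hedged illustration of that non-reproducibility: suppose two contributors run `pip freeze` in environments that resolved a transitive dependency differently (the package names and versions below are made up). The generated files conflict, and nothing in either file says which pin is the tested one:

```python
# Two hypothetical `pip freeze` outputs from two developers' machines.
freeze_alice = """click==7.1.2
requests==2.25.1
urllib3==1.26.3"""

freeze_bob = """click==7.1.2
requests==2.25.1
urllib3==1.25.11"""  # Bob's environment resolved an older urllib3

def pins(freeze_output):
    """Parse `name==version` lines into a {name: version} dict."""
    return dict(line.split("==") for line in freeze_output.splitlines())

# The lines that would surface as a merge conflict:
conflicts = {
    name: (version, pins(freeze_bob)[name])
    for name, version in pins(freeze_alice).items()
    if pins(freeze_bob)[name] != version
}
print(conflicts)  # {'urllib3': ('1.26.3', '1.25.11')}
```

Neither side is obviously "right": each file is internally consistent, and resolving the conflict by hand produces a third requirements.txt that nobody has tested.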

> For libraries just omit this?

Maybe, but then you'll face a lot of bug reports from people who end up running your library against different versions of upstream libraries than the ones you tested against.



People working on your project have the choice of using the requirements.txt or not. I would think core developers use the loose dependencies, with the aim of testing the latest and fixing the bugs. Someone has to move dependencies forward at some point, and doing this locally for knowledgeable people seems reasonable. CI should definitely - and part-time contributors should probably - just use the pinned dependencies.

This is why I would not worry about pip freeze being non-reproducible. It is a manual step: upgrade our dependencies. Testing should happen all the time. If you are happy with the result of testing after upgrading dependencies, commit requirements.txt. I don't see new tools easing the burden of co-ordinating and testing dependency upgrades. Did I misunderstand them in this context?

I don't understand the concern for the library case. Pipenv doesn't address libraries. It seems to be an explicit goal of many people not to pin library dependencies. I'm asking what the new tools are solving - and again I can't see that they are solving this. Nothing is preventing you from pinning your library dependencies if you want (using old tools), but you'll probably get people complaining that it's incompatible with other projects.


> I would think core developers use the loose dependencies, with the aim of testing the latest and fixing the bugs. Someone has to move dependencies forward at some point, and doing this locally for knowledgeable people seems reasonable.

Agreed that developers should be moving the dependencies forward, but you want to do that as a deliberate action rather than by accident. E.g. if you want to consult another developer about a bug you're experiencing, you want them to be on the same versions of dependencies as you.

> This is why I would not worry about pip freeze being non-reproducible. It is a manual step: upgrade our dependencies. Testing should happen all the time.

It's a manual step, but you still want to be able to reproduce it. E.g. if a project is in maintenance mode, you want to be able to do an upgrade of one specific dependency without having to move onto new versions of everything else.
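A rough sketch of what "upgrade one dependency, leave the rest alone" means with a pinned file (the file contents and versions here are hypothetical): it's a one-line edit, not a full re-freeze of the environment.

```python
def bump_pin(requirements_text, package, new_version):
    """Rewrite a single `name==version` pin, leaving every other line as-is."""
    lines = []
    for line in requirements_text.splitlines():
        name = line.split("==")[0]
        lines.append(f"{package}=={new_version}" if name == package else line)
    return "\n".join(lines)

pinned = """click==7.1.2
requests==2.25.1
urllib3==1.26.3"""

# A security fix lands in requests; nothing else moves:
print(bump_pin(pinned, "requests", "2.25.2"))
```

Running `pip freeze` against a freshly resolved environment instead would silently move every other pin at the same time, which is exactly the accidental upgrade the comment above warns against.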

I don't work in Python any more, so I don't know what the new tools do or don't do. I was just starting from your "I don't know why I would not have loose dependencies in setup.py and concrete, pinned dependencies in requirements.txt.", and I know that workflow gave me a number of problems that I simply don't have when working in other languages. So I'm hoping that Python has caught up with the things that are known to work elsewhere, but maybe not.



