Hacker News | jez's comments

> Additionally, some repos can be configured to automatically merge PRs when all requirements are met, one of which might be your approval.

If anyone at GitHub is reading this, I’d love a fourth checkbox in the “leave a review” modal that is “Approve but disable auto merge” (alongside Comment/Approve/Request changes)! Even just surfacing “this PR has auto merge enabled” near the Approve button would be great.


You might try adding this branch protection rule to require conversation resolution: https://docs.github.com/en/repositories/configuring-branches...

This feels like a suboptimal solution to me because I personally like to keep comments in "unresolved" state so that they remain visible and other folks can weigh in on them if they want, but in a way that doesn't block the PR. Basically I wish that GitHub would either separate the "collapsed" and "resolved" concepts, or add this "approve without merging" button.

I want the same mode, but on iOS! Imagine carrying nothing but the phone in your pocket, sitting down at your desk, plugging your phone into the monitor, which has your keyboard and mouse docked, and you have a full development environment.

Partially there on Android Pixels with "Linux Terminal". With the rumored convergence of ChromeOS and Android, it should be possible to have a desktop ChromeOS pKVM VM with accelerated vGPU graphics on Android mobile devices that have enough RAM.

You can sort of do that, but you're VNCing into a remote device.

Even in Vim, the editing experience falls over when making markdown tables that have non-trivial content in their cells (multiple paragraphs, a code block, etc.). I recently learned that reStructuredText supports something called "list tables":

https://docutils.sourceforge.io/docs/ref/rst/directives.html...

Where a table is specified as a depth-2 list and then post processed into a table. Lists support the full range of block elements already: you can have multiple paragraphs, code blocks, more lists, etc. inside a list item.
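For reference, a list table is written roughly like this (a minimal sketch based on the docutils directive; the caption and cell contents are made up):

```rst
.. list-table:: Example
   :header-rows: 1

   * - Name
     - Notes
   * - first row
     - A cell can hold multiple paragraphs,

       nested lists, or a code block.
   * - second row
     - more content
```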

This syntax inspired the author of Markdoc[1] (who came from an rST background) to support tables using `<hr>`-separated lists[2] instead of nested lists (to provide more visual separation between rows).

I have found various implementations of list table filters for Pandoc markdown[3][4], but have never gotten around to using any of them (and I've tossed around ideas of implementing my own).

[1] https://markdoc.dev

[2] https://markdoc.dev/docs/tags#table

[3] https://github.com/pandoc-ext/list-table

[4] https://github.com/bpj/pandoc-list-table


reStructuredText & AsciiDoc are so, so much better than Markdown since they have rich feature sets to actually build documentation, blogging, & so on. It’s a massive shame everyone would prefer _yet another Markdown fork_ like the OP.


What is your Linux photo editing software of choice?



Damn, there really are no original ideas anymore. I have been working on essentially the exact thing that Spektrafilm is doing. I'll check that out to see how I can improve my setup.


Coming from Chrome, the only thing I still miss about the Firefox right-click context menu is that Firefox doesn't have a "Look up '<selection>'" item on macOS, to look up words I don't know in the macOS dictionary.

https://bugzilla.mozilla.org/show_bug.cgi?id=1116391


A more complicated version of this problem exists in TypeScript and Ruby, where there are only arrays. Python's case is considerably simpler because it also has tuples, whose length is fixed at the time of construction.

In Python, `x = []` should always have a `list[…]` type inferred. In TypeScript and Ruby, the inferred type needs to account for the fact that `x` is valid to pass to a function which takes the empty tuple (empty array literal type) as well as a function that takes an array. So the Python strategy #1 in the article of defaulting to `list[Any]` does not work because it rejects passing `[]` to a function declared as taking `[]`.
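To illustrate (a minimal sketch; the function names are made up): in TypeScript, a bare `[]` literal has to be assignable both to the empty-tuple type `[]` and to ordinary array types, so the checker can't eagerly commit it to one or the other:

```typescript
// A parameter typed as the empty tuple `[]`:
function takesEmptyTuple(t: []): number {
  return t.length;
}

// A parameter typed as an ordinary array:
function takesArray(a: number[]): number {
  return a.length;
}

// The same fresh `[]` literal type-checks against both signatures:
console.log(takesEmptyTuple([])); // 0
console.log(takesArray([]));      // 0
```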


Another fun consequence of this is that you can initialize otherwise-unset file descriptors this way:

    $ cat foo.sh
    #!/usr/bin/env bash

    >&1 echo "will print on stdout"
    >&2 echo "will print on stderr"
    >&3 echo "will print on fd 3"

    $ ./foo.sh 3>&1 1>/dev/null 2>/dev/null
    will print on fd 3
It's a trick you can use if you've got a super chatty script or set of scripts, you want to silence or slurp up all of their output, but you still want to allow some mechanism for printing directly to the terminal.

The danger is that if you don't open it before running the script, you'll get an error:

    $ ./foo.sh
    will print on stdout
    will print on stderr
    ./foo.sh: line 5: 3: Bad file descriptor


With exec you can open file descriptors of your current process.

  # Check if fd 3 is already open; if not, open it to /dev/null.
  if [[ ! -e /proc/$$/fd/3 ]]; then
      exec 3>/dev/null
  fi
  >&3 echo "will print on fd 3"
This will fix the error you are describing while keeping the functionality intact.

Now with that exec trick the fun only gets started. Because you can redirect to subshells and subshells inherit their redirection of the parent:

  set -x # when debugging, print all commands run, prefixed with CMD:
  PID=$$
  BASH_XTRACEFD=7
  LOG_FILE=/some/place/to/your/log/or/just/stdout
  exec 3> >(gawk '!/^RUN \+ echo/{ print strftime("[%Y-%m-%d %H:%M:%S] <PID:'$PID'> "), $0; fflush() }' >> $LOG_FILE)
  exec > >(sed -u 's/^/INFO:  /' >&3)
  exec 2> >(sed -u 's/^/ERROR: /' >&3)
  exec 7> >(sed -u 's/^/CMD:   /' >&3)
  exec 8>&1 #normal stdout with >&8
  exec 9>&2 #normal stderr with >&9
And now your bash script will have a nice log with stdout and stderr prefixed with INFO and ERROR and has timestamps with the PID.

Now the disclaimer is that you unfortunately have no guarantees that stdout and stderr lines will come out in the right order, even though we run everything unbuffered (-u and fflush).


Nice! Though I'm not really sure of the point, since AI can bang out a much more maintainable (and synced) wrapper in Go in about 0.3 seconds.

(If runners have sh, then they might as well have a real compiler: scratch > debian > alpine, "don't debug in prod".)


If you just want to print to the terminal even when normal stdout/stderr is redirected, you can also use >/dev/tty, but obviously that is less flexible.


Interesting. Is this just literally “fun”, or do you see real world use cases?


The aws CLI has a set of porcelain commands for S3 access (aws s3) and plumbing commands for lower-level access to advanced controls (aws s3api). The plumbing command aws s3api get-object doesn't support writing to stdout natively, so if you need that and want to use it in a pipeline (e.g. with pv), you would naively do something like

  $ aws s3api get-object --bucket foo --key bar /dev/stdout | pv ...
Unfortunately, aws s3api already prints the API response to stdout, and error messages to stderr, so if you do the above you'll clobber your pipeline with noise, and using /dev/stderr has the same effect on error.

You can, though, do the following:

  $ aws s3api get-object --bucket foo --key bar /dev/fd/3 3>&1 >/dev/null | pv ...
This will pipe only the object contents to stdout, and the API response to /dev/null.


Would be nice if `curl` had something to dump headers to a third file descriptor while outputting the response on stdout.


This should work?

  curl --dump-header /dev/fd/xxx https://google.com
or

  mkfifo headers.out
  curl --dump-header headers.out https://google.com
unless I'm misunderstanding you.


Ah yeah, `/dev/fd/xxx` works :) somehow thought that was Linux only.


(Principal Skinner voice) Ah, it's a Bash expression!


I have used this in the past when building shell scripts and Makefiles to orchestrate an existing build system:

https://github.com/jez/symbol/blob/master/scaffold/symbol#L1...

I didn't have control over the existing build system, and it would produce output on stdout/stderr. I wanted my build scripts to show the build system's output only if building failed (and there might have been multiple build system invocations leading to that failure). I also wanted the second level to be able to log progress messages shown to the user immediately on stdout.

    Level 1: create fd=3, capture fd 1/2 (done in one place at the top-level)
    Level 2: log progress messages to fd=3 so the user knows what's happening
    Level 3: original build system, will log to fd 1/2, but will be captured
It was janky and it's not a project I have a need for anymore, but it was technically a real world use case.
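A minimal sketch of that shape (messages and the log location are made up):

```shell
#!/usr/bin/env bash
# Level 1: remember the real stdout as fd 3, then capture fds 1/2 into a log.
exec 3>&1
log="$(mktemp)"
exec >"$log" 2>&1

# Level 2: progress messages go straight to the user via fd 3.
echo "building..." >&3

# Level 3: the wrapped build system writes to fds 1/2, which land in the log.
echo "noisy build output"

# Restore fds 1/2; on failure you would dump "$log", on success keep it hidden.
exec 1>&3 2>&3 3>&-
```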


One of my use cases has been enforcing ultimate or full trust of a GPG signature.

    tmpfifo="$(mktemp -u -t gpgverifyXXXXXXXXX)"
    gpg --status-fd 3 --verify checksums.txt.sig checksums.txt 3>"$tmpfifo"
    grep -Eq '^\[GNUPG:] TRUST_(ULTIMATE|FULLY)' "$tmpfifo"
It was a while ago that I implemented this, but IIRC the reason was to validate that the key that signed this is actually trusted, not just that the signature is cryptographically valid.

You can also redirect specific file descriptors into other commands:

    gpg --status-fd 3 --verify checksums.txt.sig checksums.txt 3> >(grep -Eq '^\[GNUPG:] TRUST_(ULTIMATE|FULLY)')


This is often used by shell scripts that wrap another program, so that its input and output can be controlled. E.g. Autoconf uses this to invoke the compiler and to control nested log output.


Red Hat and other RPM-based distributions' recommended kickstart scripts use tty3 via a similar method.


Multiple levels of logging, all of which you want to capture but not all in the same place.


Wasn't the idiomatic way the `-v` flag (repeated for more verbosity), with stderr for errors (and maybe warnings too)?


It is, and all logs should ideally go to stderr. But that doesn’t let you pipe them to different places.


Yes, but sometimes you want just important non-error logs to go to the console or journal, and then those plus verbose logs to go to a file that gets rotated, and then also stderr on top of that.
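Something like this sketch (the file name and messages are made up):

```shell
logfile="$(mktemp)"
exec 3>>"$logfile"            # fd 3: the verbose log file

# Important messages: console and file.
echo "important event" | tee -a "$logfile"

# Verbose messages: file only.
echo "verbose detail" >&3

# Errors: stderr on top of that.
echo "something failed" >&2
```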


For comparison, Visa's stated FY 2025 (ended Sep 30, 2025) payments volume was $14.2T.

rough math, but:

$14.2T / $1.9T * 1.6% = 12% global GDP


I was curious, and the Automated Clearing House (ACH) has a TPV of $93 trillion, which means ACH is 78%?? That seems too high.

Oh - not all bank transfers count in GDP. I often move money from one account to another.

Note that Visa has the same issue: withdrawing money from an ATM shouldn't count toward GDP! Neither should Venmo-ing a friend to settle up a split restaurant bill (my Venmo is attached to my debit card).


At least it’s not 24.9%

Americans and credit have an unhealthy relationship.


Not all Visa or Mastercard transactions are credit-backed. I'd argue the large majority aren't anymore; debit Visa/Mastercard is more common now.


Paypal TPV YoY growth for 2025 was 7%[1].

Stripe cites 34% growth for the same period and metric.

[1]: https://s205.q4cdn.com/875401827/files/doc_financials/2025/q...


That's not bad for a mature business like PayPal.


I mean, it's not like Stripe was founded yesterday. Stripe: 2010. PayPal: 1998.

I'd argue that 99% of the "internet gdp" happened after Stripe was founded


I'm not the most well versed, but isn't that still insane, being at 4x the valuation of PayPal? Maybe it's more that PayPal's valuation is crap than that Stripe's is too high. Adyen is close to PayPal with a P/E of 30 (vs PayPal's sub-10), and Adyen, like PayPal, is close to being back at its IPO level.

PayPal seems crazy when it has acquired businesses like Honey (which probably hasn't helped) and Braintree/Venmo since then. It's pretty funny that PayPal was spun off as the better growth stock, yet eBay has tripled since then and their market caps are now the same.


The tender offer announced in the article is open to former employees as well, so they personally profit regardless of Stripe being public (unless the claim is that by being public the valuation would be materially higher than the stated valuation for this offer).

