What’s the relationship between Selenium, Puppeteer, and WebDriver BiDi? I’m a happy user of Playwright. Is there any reason why I should consider Selenium or Puppeteer?
> Is there any reason why I should consider Selenium or Puppeteer?
I'm not a heavy user of these tools, but I've dabbled in this space.
I think Playwright is far ahead of the alternatives in features and robustness. Firefox has been supported for a long time, and the same goes for other features mentioned in this announcement, like network interception and preload scripts. CDP in general is much more mature than WebDriver BiDi. Playwright also has a more modern API, with official bindings in several languages.
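For anyone who hasn't used those two features, here's a minimal sketch of what they look like in Playwright (the URL, route pattern, and payload are made up for illustration):

    import { chromium } from 'playwright';

    (async () => {
      const browser = await chromium.launch();
      const page = await browser.newPage();

      // Network interception: stub an API endpoint before the page requests it.
      await page.route('**/api/prices', route =>
        route.fulfill({
          status: 200,
          contentType: 'application/json',
          body: JSON.stringify({ eur: 1.0 }), // hypothetical payload
        })
      );

      // Preload script: runs in every new document before any page script does.
      await page.addInitScript(() => {
        (window as any).__testMode = true; // hypothetical flag
      });

      await page.goto('https://example.com');
      await browser.close();
    })();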
One benefit of WebDriver BiDi is that it's in the process of becoming a W3C standard, which might lead to wider adoption eventually.
But today, I don't see a reason to use anything other than Playwright. Happy to read alternative opinions, though.
Both Selenium and Playwright are very solid tools; a lot simply comes down to preference and experience.
One of the benefits of using Selenium is the extensive ecosystem surrounding it. Things like Selenium Grid make parallel and cross-browser testing much easier, either on self-hosted hardware or through services like Sauce Labs.
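To illustrate (using the JS bindings; the hub URL below is a placeholder), pointing Selenium at a Grid instead of a local driver is essentially a one-line change:

    import { Builder, Browser } from 'selenium-webdriver';

    (async () => {
      // The same script runs locally or against a Grid / Sauce Labs-style hub;
      // only the server URL changes.
      const driver = await new Builder()
        .forBrowser(Browser.FIREFOX)
        .usingServer('http://my-grid-hub:4444/wd/hub') // placeholder hub URL
        .build();
      try {
        await driver.get('https://example.com');
        console.log(await driver.getTitle());
      } finally {
        await driver.quit();
      }
    })();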
Playwright can be used with similar services like BrowserStack, but AFAIK that requires an extra layer of their in-house SDK to actually make it work.
Selenium also supports more browsers, although one can wonder how much that matters given Chrome's dominance these days.
Another important difference is that Playwright really is a test automation framework, whereas Selenium is "just" a browser automation library. With Selenium you need to bring your own assertion library, test runner, and reporting.
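Concretely, with Playwright the runner, fixtures, and assertions ship together (the URL and title below are placeholders):

    import { test, expect } from '@playwright/test';

    test('landing page has the right title', async ({ page }) => {
      await page.goto('https://example.com');
      // Built-in web-first assertion; no separate assertion library needed.
      await expect(page).toHaveTitle(/Example/);
    });

Running "npx playwright test" then handles parallelism and reporting out of the box, whereas a Selenium setup typically wires those up via Jest/Mocha/JUnit and friends.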
I think Playwright depends on patched forks of the browsers to support the features it needs, so it may be less stable than using a standard explicitly supported by the browsers, and the standards-based approach may be more representative of realistic browser use.
I am an active user of both Selenium and Puppeteer/Pyppeteer. I use them because they're what I learned and they still work great, and explicitly because they're not Microsoft.
The article states the wrong resolution for the Apple display, and it’s an interesting mistake because these days there are actually two versions of 6K in consumer-marketed computer monitors: the one used by the Apple display (6016x3384) and the slightly larger one used by the Dell U3224KB 6K that came out earlier this year (6144x3456).

In fact, an interesting thing people found out when using the Dell 6K on Intel MacBook Pros running macOS 10.15 through 13.6 is that the Mac cannot do Display Stream Compression at the Dell’s native 6144x3456, so it can only drive the monitor at 30Hz instead of 60Hz. However, if they fool the Mac into thinking the display is 6016x3384 (same as the Apple display), DSC magically works and they get 60Hz on the Dell (at the expense of sacrificing some screen real estate). Apple probably hardcodes the 6016x3384 resolution somewhere in their OS code. Thankfully, people report that this problem has been fixed as of macOS 14.1, but the bug existed for four years.
Edit: this problem only seems to happen on Intel, not Apple silicon machines.
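Rough back-of-the-envelope arithmetic (my numbers, not from the parent comment) on why the native mode needs DSC over a DP 1.4 link at all, and why 30Hz fits uncompressed:

    // Raw video bit rate, ignoring blanking overhead (which only adds more).
    const gbps = (w: number, h: number, hz: number, bpp: number) =>
      (w * h * hz * bpp) / 1e9;

    const hbr3Usable = 25.92; // DP 1.4 HBR3 payload in Gbit/s after 8b/10b encoding

    console.log(gbps(6144, 3456, 60, 30), '>', hbr3Usable); // ~38.2 Gbit/s: needs DSC
    console.log(gbps(6144, 3456, 30, 30), '<', hbr3Usable); // ~19.1 Gbit/s: fits uncompressed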
> that the Mac cannot do Display Stream Compression at the Dell's native 6144 x 3456,
Can't, or won't? M1 MacBook Pros for some reason can't do 4K 120Hz over HDMI unless you buy a specific USB-C-to-HDMI adapter and fool it into thinking it's DisplayPort (or something like that, I'm paraphrasing; you can find info if you search for "cablematters DDC 4k120 m1").
There's no "fool it into thinking it's displayport". What you're describing is having the Mac actually literally emit a DisplayPort signal, and a separate device converting that to an HDMI signal. The USB-C HDMI Alt mode standard was never implemented by any real products, and all USB-C to HDMI converters are active adapters that consume DisplayPort signals and emit HDMI signals. Not all of those support HDMI 2.1, which introduced a drastically different signalling mode for HDMI in order to support much higher data rates (and also added display stream compression, further increasing the maximum resolution and refresh rate capabilities).
The custom firmware isn't actually all that interesting of a point, because slightly broken display behavior is extremely common if you look closely at anything other than normal everyday TV resolutions and refresh rates.
USB-C/DP to HDMI adapters often need to rewrite some of the EDID information, because they need to be transparent to the host computer and the display: it's the adapter's responsibility to ensure that modes that cannot be handled on both sides of the adapter are not advertised to the PC. When you layer that complexity on top of the existing minefield of ill-conceived EDID tables widespread in monitors, and on top of the limitations of macOS (limited special-case EDID handling, little to no manual overrides or custom mode settings), it would be more surprising if there weren't some common use cases that theoretically ought to work but are simply broken. Applying the necessary EDID patch via adapter firmware is simply the easiest option where macOS is involved.
Even on Windows, with a DP cable directly from GPU to display, it's not all that rare to need a software EDID override in order to use modes that ought to work out of the box (e.g., I have a recent Dell monitor that cannot simultaneously do HDR and variable refresh rate out of the box).
That "some reason" is that a standard DP-to-HDMI 2.1 protocol converter can't negotiate beyond HDMI 2.0 link rates without the host computer knowing about and doing FRL training on the HDMI side. Completely unrelated to any limitations related to 6144 x 3456.
As I understand it, VESA wanted automatic fallback to HDMI 2.0 speeds in the event of an 18 Gbps cable or other signal-integrity issue, so ultimately they chose to require the host to be HDMI-aware for the converter to enable HDMI 2.1 speeds, rather than requiring the converter to be smart.
Yes, as a specific example, if the HDMI sink wants DSC, maximizing the quality (minimizing the compression ratio) fundamentally cannot be done without knowing the end-to-end bandwidth.
I can verify that on an M1-M3 Mac running Sonoma (14.1.1), the U3224KB supports 6144 x 3456 at 30-bit color (60Hz). Under Ventura this did not work. Seems fixed now.
I think the real punishment for Apple is that it’ll be harder for their employees to get their PERMs certified, i.e. they’re more likely to be denied or audited (an audit adds 5-6 months on top of the normal 10-11 months), which has been the case for Facebook since its similar settlement with the DOJ.
1) PERM processing time keeps getting longer and longer (now at 11 months, up from 5-6 months a few years ago). Do you know why, or whether DOL has any plans to improve it?
2) What’s the current average PERM-based I-485 processing time you’re seeing in your office? Any processing time advantage to submitting I-485 separately versus concurrently with I-140?
1. I thought that one of the purposes of the new PERM form and system was to improve overall processing, but things have only gotten worse, and unfortunately the expectation is not that they will get better.
2. All over the place, although the majority of employment-based I-485 applications seem to be approved within 6 months.
I’ve been using the new Yarn (with workspaces and Plug’n’Play) on a reasonably complex JS project for about 2 years, and I think it works great. Congrats to the Yarn team on this big release.
I’ve actually tried Paw but I found it tricky because it doesn’t have a quick fuzzy search for existing API requests, which is something I use a lot in Insomnia.
I used to lead the front-end development of this site, including all the visualizations. Glad to see it make the HN front page. The team building this consists of really great people, and it’s a great place to work. Most visualizations were custom-built to various degrees.
Fun fact: the treemap was done with vanilla WebGL2 completely from scratch (hand-written custom shaders, no Three.js or Babylon) for performance, and I think it’s probably the fastest and smoothest-animating treemap you can find on the Internet.
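For the curious, this isn’t the site’s actual code, just a sketch of the general technique: every cell is a unit quad stretched by per-instance attributes, so thousands of rectangles render (and animate, by re-uploading one buffer per frame) in a single instanced draw call.

    const canvas = document.querySelector('canvas')!;
    const gl = canvas.getContext('webgl2')!;

    const vsSrc = `#version 300 es
    in vec2 corner;   // unit-quad corner (0..1), shared by all instances
    in vec4 rect;     // per-instance cell: x, y, width, height in clip space
    in vec3 color;    // per-instance fill color
    out vec3 vColor;
    void main() {
      gl_Position = vec4(rect.xy + corner * rect.zw, 0.0, 1.0);
      vColor = color;
    }`;

    const fsSrc = `#version 300 es
    precision mediump float;
    in vec3 vColor;
    out vec4 outColor;
    void main() { outColor = vec4(vColor, 1.0); }`;

    function compile(type: number, src: string): WebGLShader {
      const s = gl.createShader(type)!;
      gl.shaderSource(s, src);
      gl.compileShader(s);
      return s;
    }

    const prog = gl.createProgram()!;
    gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSrc));
    gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSrc));
    gl.linkProgram(prog);
    gl.useProgram(prog);

    // Shared unit quad, drawn as a triangle strip.
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([0, 0, 1, 0, 0, 1, 1, 1]), gl.STATIC_DRAW);
    const cornerLoc = gl.getAttribLocation(prog, 'corner');
    gl.enableVertexAttribArray(cornerLoc);
    gl.vertexAttribPointer(cornerLoc, 2, gl.FLOAT, false, 0, 0);

    // Two example cells, interleaved as x, y, w, h, r, g, b.
    // In a real treemap this buffer is re-uploaded each animation frame.
    const cells = new Float32Array([
      -0.9, -0.9, 0.8, 1.8, 0.2, 0.5, 0.9,
       0.0, -0.9, 0.9, 1.8, 0.9, 0.4, 0.2,
    ]);
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER, cells, gl.DYNAMIC_DRAW);
    const stride = 7 * 4; // 7 floats per instance
    const rectLoc = gl.getAttribLocation(prog, 'rect');
    gl.enableVertexAttribArray(rectLoc);
    gl.vertexAttribPointer(rectLoc, 4, gl.FLOAT, false, stride, 0);
    gl.vertexAttribDivisor(rectLoc, 1);   // advance once per instance, not per vertex
    const colorLoc = gl.getAttribLocation(prog, 'color');
    gl.enableVertexAttribArray(colorLoc);
    gl.vertexAttribPointer(colorLoc, 3, gl.FLOAT, false, stride, 4 * 4);
    gl.vertexAttribDivisor(colorLoc, 1);

    // One draw call for every cell.
    gl.drawArraysInstanced(gl.TRIANGLE_STRIP, 0, 4, cells.length / 7);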