Hacker News | zarvox's comments


It's substantially less risky, less invasive, and easier to do in parallel with other ongoing feature development than a port to another language would be. It's much easier to get incremental value for the investment if you're keeping the same language and codebase than if you're trying to replace everything all at once.


I've recently been building an IRC bouncer and webapp with Actix, and it's been really smooth sailing so far -- excellent documentation, extensive examples, and everything I've touched so far has just worked the way you'd expect it to. It's a gem of a project.


Wanted to do the same exact thing! Is it public?


Not yet (still incomplete) but I'd be happy to drop you a line if/when that changes - send me an email?


My impression is that bus1 has taken the (copious) feedback from the kdbus debacle, actually applied it, and looked at other platforms' IPC to build a novel IPC system for Linux worth using. There's a talk [1] about the design of bus1, comparing it against platforms where IPC is saner and arguing that the capability model is the right design for IPC: composable, understandable, and secure by default. It strikes me that the bus1 devs arrived at their design after doing the things you suggested! :)

Is there something I'm missing? What might an ideal IPC API look like to you?

[1] - https://www.youtube.com/watch?v=6zN0b6BfgLY


> What might an ideal IPC API look like to you?

Erlang's internal IPC has been doing this well since before there was a Linux kernel. I don't know whether that approach can be ported into the kernel, or how complex it is behind the scenes, but spawn / send / receive are apparently simple concepts.

In Erlang: http://erlang.org/doc/getting_started/conc_prog.html#id68696

In Elixir: http://elixir-lang.org/getting-started/processes.html
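To make the spawn / send / receive model concrete, here's a rough analogue sketched in Python with the standard-library multiprocessing module. This is not Erlang's API, just an illustration of the same three primitives: a spawned process with a mailbox it drains, and messages sent to it.

```python
from multiprocessing import Process, Queue

def worker(mailbox: Queue, replies: Queue) -> None:
    # Erlang-style "receive" loop: block on the mailbox, handle each message.
    while True:
        msg = mailbox.get()            # receive
        if msg == "stop":
            break
        replies.put(("echo", msg))     # send a reply back

if __name__ == "__main__":
    mailbox, replies = Queue(), Queue()
    proc = Process(target=worker, args=(mailbox, replies))  # spawn
    proc.start()

    mailbox.put("hello")               # send
    print(replies.get())

    mailbox.put("stop")
    proc.join()
```

Each process owns its mailbox and shares no other state, which is the property that makes the Erlang model easy to reason about.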


Authentication, authorization, and resource quotas for agents are not really addressed in the Erlang model, but would be expected for IPC on a Unix-like system.


Do we really want to put all of that into the kernel rather than implement it in userland? I get the feeling that it's too application-dependent, with not enough general principles.

Maybe I just don't understand the problem they are out to solve.


The reason for a bus-style IPC implemented in the kernel is the same reason sendfile(2) exists. I doubt anyone thinks it's the pinnacle of great design, but it reduces copies and context switches for real application workloads: sometimes the more 'proper' design is sacrificed for practicality.
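For context, here's a minimal sketch of the copy avoidance sendfile(2) provides, via Python's os.sendfile wrapper (Linux-specific; file-to-file sendfile works on reasonably recent kernels). The kernel moves the bytes between the two descriptors directly, so they never pass through a userspace buffer.

```python
import os
import tempfile

# Create a source file with some data.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"payload" * 1024)
src.flush()

dst = tempfile.NamedTemporaryFile(delete=False)

# Copy entirely in the kernel: no read()/write() round-trips through
# a userspace buffer, and fewer context switches.
size = os.fstat(src.fileno()).st_size
sent = 0
while sent < size:
    sent += os.sendfile(dst.fileno(), src.fileno(), sent, size - sent)

assert sent == size
```

The same mechanism is why serving static files from a socket with sendfile(2) is cheaper than a read/write loop.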


> What might an ideal IPC API look like to you?

I can't answer that, because I don't have a PhD related to IPC and I haven't done the research necessary to fully understand the field. I have looked at how some other systems do it, but I know that's not a strong enough knowledge base to build a good IPC system on.

I do have enough understanding to know that when I see a good IPC API, I'll say, "wow, that's really nice."


Which IPC APIs do you consider really nice? And which not?


What do you think of zeromq?


Submitted this to Our Incredible Journey [1] which documents a long string of services shut down post-acquisition.

[1]: http://ourincrediblejourney.tumblr.com/


Why is this site so hard to read?


Because some of the screenshots are JPEG format. JPEG compression has trouble with sharp edges.


For folks looking for escape rooms all over the world: take a look at http://escaperoomdirectory.com/


http://www.freedesktop.org/wiki/Software/systemd/TheCaseForT...

In short: because having things in /usr is equally compatible, and makes some useful things like atomically snapshotting /usr to snapshot all executables, mounting /usr readonly, etc. possible.


Ah I had not considered snapshotting just /usr before. Thanks for the link, it makes a lot more sense now!


Relevant thread from the designers of Cap'n Proto and Flatbuffers from around the date of the Flatbuffers release: https://news.ycombinator.com/item?id=7901991


And from that thread, here's the Cap'n Proto author comparing Cap'n Proto, FlatBuffers, and SBE -- https://capnproto.org/news/2014-06-17-capnproto-flatbuffers-...


Indeed.

CRUD apps (which I'll argue are the vast majority of applications) are easier to write, easier to understand, and easier to make behave the way users expect when there's only one datastore and you never have to deal with eventual consistency or distributed or half-applied migrations.

As an anecdote, at a previous employer, we provided user accounts and OAuth for connecting these accounts to our API. We made separate user and OAuth services, with separate databases.

What resulted was an unnecessary amount of complexity in coordinating user deactivation, listing OAuth tokens for users, and delegation, all while authenticating requests between microservices. Our API could not safely expose a method to deactivate a user and all of their OAuth tokens in a single DELETE. A single service that handled both would have been easier to build, and would have wasted less time up front on complexity that we didn't need, and couldn't make good use of, at our scale at the time.

To solve this, we eventually merged all the data back into a single database, so we could expose sane invariants at the API level without needing to build an eventually-consistent message queue.
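With everything in one database, the deactivation can be a single transaction. A minimal sketch with sqlite3 (the schema and table names here are hypothetical, just for illustration): either both the user flag and the token deletion commit, or neither does, so no observer ever sees a deactivated user with live tokens.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, deactivated INTEGER DEFAULT 0);
    CREATE TABLE oauth_tokens (token TEXT, user_id INTEGER REFERENCES users(id));
    INSERT INTO users (id) VALUES (1);
    INSERT INTO oauth_tokens VALUES ('abc', 1), ('def', 1);
""")

def deactivate_user(conn: sqlite3.Connection, user_id: int) -> None:
    # One transaction: the connection context manager commits both
    # statements together, or rolls both back on error.
    with conn:
        conn.execute("UPDATE users SET deactivated = 1 WHERE id = ?", (user_id,))
        conn.execute("DELETE FROM oauth_tokens WHERE user_id = ?", (user_id,))

deactivate_user(db, 1)
```

Splitting the two tables across separate services (and databases) forfeits exactly this guarantee.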


Isn't that more an issue of multiple databases than an SOA issue?


Perhaps, but even a single database doesn't ensure atomicity on its own.

For instance: suppose you're deactivating a user. You've got a user table, with an id column and a "deactivated" column. Then you've got an OAuth tokens table, with a foreign key column to userid.

If you have two microservices hitting the same database, then you make a DELETE to the user service, which now needs to send a DELETE to the OAuth service. Regardless of which transaction logically goes first, you have a race: two services with separate transactions are modifying the same DB. Whichever transaction commits first, the other could still fail, leaving your external view inconsistent.

One way to solve this is to have the OAuth service check the user table to see if the user is disabled, and treat all of the tokens for that user as deactivated, but then your OAuth service is tightly coupled to your user service's schema, which means you can no longer modify the two services separately. My impression is that this isn't really what people mean when they say "microservices".

Another option is to have the OAuth service ask the user service if the user is still live, but now you have a circular dependency between services, and either one failing can effectively put the other out of commission.

Tradeoffs in all things.


OAuth is better considered a microservice that grew a little too big. Strong consistency in identity management is a well-studied problem; this is why people pay money for Active Directory consultants.

edit: The trick to your particular dilemma is to design your operations more carefully. Don't allow sensitive operations for any service via long lasting authentication tokens.


Why should OAuth be separate from the user service?


There's also pastebinit, which is packaged by at least the major distros and supports multiple pastebins.

http://launchpad.net/pastebinit

