Hacker News | blah32497's comments

Why wait? You can just get a Nissan Leaf now.


I currently live in London, so I don't need a car. Soon I'll be moving somewhere else (as yet undetermined), where I probably will. If I don't have a driveway, though (which is likely), then an electric car of any type becomes very difficult to charge.


Oh wow - I'm very happy about the BSD license. This means people can more easily spin up businesses around this platform. I wonder why they chose it...

Does anyone have any insight?


BSD-like licenses seem to be the standard for userland graphics libraries. There's a lot of useful stuff in there (for example the shader compiler) that may be of general use elsewhere, and we wanted to make sure people could use it.

The 3-clause BSD is a compromise - we're happy for the stuff to be used but it's nice to get some credit :)


Thank you for the clarification. It seems overly generous =)

Often code is GPL'd with pricey licenses for commercial applications - which always makes me think twice about investing my time in familiarizing myself with it.

I'm very impressed they aren't trying to make a quick buck here. It sounds like they have a very innovative long term strategy that will pay off many times over.

I think your competition is also fantastic, as it will encourage people to open source their improvements.

Thank you for all your hard work, Dr. Upton.


You're most welcome. Good times.


I don't really play a lot of videogames anymore, but I have a lot of trouble empathizing with this. When the gender roles are reversed it seems silly to me.

"most every single 'put cursor on dude' game requires me to inhabit the grey/brown shoes of Mancho McGruntsalot, so I really don't feel like playing them"

As a male, I don't think I've ever had an issue following a feminine video game character. In fact, I've never felt compelled to pause and think that role-playing a female character was weird (say in Tomb Raider, or No One Lives Forever, or any of the other relatively few female-centric games). Can you elaborate on why this is a hang-up for you when it comes to playing a man?


>As a male, I don't think I've ever had an issue following a feminine video game character.

That's because lead female characters are (almost always) sexualized.


Imagine a world where every major video game stars women. Where when there is a man in the cast, his only character trait is "he's the guy", versus an assortment of ladies who are at least two-dimensional. He is also probably going to be kidnapped, so he has no agency as a character - really, he only exists as a prize for the ladies to fight over.

This is not one game. This is pretty much any video game. This is pretty much every single video game in the world. Games where a male character has agency are rare. Really, games are all for girls, and you should go back to your little man things, like cooking and raising the baby.

Oh yeah, and any big budget movie has a similar gender distribution too. Almost every animated feature you watch growing up tells you to be a good little prince and wait for a brave princess to come save you. Same with TV. Everyone chases the disposable income of the teenage girl, and molds their product to her taste.

And of course there are big chunks of society that basically want to disempower you because of your gender, too.

Wow, it sure would be nice to be able to have a space where you could imagine being someone who can kick ass and not bother taking names now and then. Maybe to even spend some time pretending to be someone who'll inspire you to take more control of your real life. But all the games like that are for the women. And everyone says the games about guys aren't 'real' games anyway. They're "casual" games. And are all about cooking and taking care of babies anyway. Not very good for escaping from reality.

The few games that star dudes kicking ass become precious to you. Larry Croft, Bayonet, Kay Archer, Samuy, these are rare chances to pretend to be someone not too far off from your actual self who has POWER to actually change their environment. Sometimes you might even find yourself thinking "how would Bayonet handle this situation" and taking inspiration from it, because he's one of the few role models of your gender available for you to really inhabit via games. One of the few times you've been able to take the role of a supremely confident and capable guy, instead of a supremely confident and capable woman.

Then you have the temerity to speak up about how good the rare game is that lets you play as a guy with guy concerns, who can also kick ass, and you immediately get a bunch of ladies piling on and insisting that this all seems really silly and that games are made for women - guys have those "casual" games about cleaning and cooking, or about ungendered abstract puzzles, and they sure play a lot of them, don't they!

And then you whimper softly to yourself and pick up your Vita to play TxK because Jane Minter may be a lady, but she's a lady who makes a damn fine abstract game about blasting the shit out of abstract things.

It's not pretending to be the opposite sex for one game that's the problem. It's pretending to be the opposite sex for almost EVERY game. It's pretty fucking wearying.


The debt issue has less to do with blackmail and more to do with having serious financial strains. They're concerned you'll sell information in exchange for money so that you can pay off your debts. It's the same for drug addiction and gambling.

I think there was a guy caught spying recently who had a gambling addiction.


It's the same idea - vulnerability to coercion.


I'm fairly ignorant of the details of static analysis, but why is it being done on programs written in C?

Shouldn't they use languages specially suited for this kind of analysis?

I remember learning that stateless programming (i.e. functional programming) makes this kind of analysis several orders of magnitude easier since it eliminates coupling and control-flow dependence. Yet I've never heard of critical software being written in Haskell or whatever.


When you're writing safety-critical code, what you want above all else is lack of surprises. Sure, C has pitfalls, what language doesn't? But we know what the pitfalls are. We have decades of experience in avoiding them. The toolchains are mature and very well tested. The source code maps fairly directly to the hardware. You don't have to put your trust in esoterica like trying to find a garbage collector that claims to be able to meet real-time constraints and then trying to understand the edge cases in the analysis on which that claim is based.

It's okay to have bleeding edge technology in the ancillary tools like the static analyzer. But for safety-critical work, you don't want bleeding edge technology in the language in which you're writing the actual code.


Also, a straightforward mapping from source code to machine code is important for auditing generated code.


C99 with some restrictions isn't actually that big a language; it's quite possible to put together a formal semantics for it, especially if you disallow heap allocation.

There's at least one fairly mature implementation of a certified compiler out there (CompCert) with only minor restrictions to the language.


I suspect the arrow of causality goes the other way: the control software was written in C first. Later, Airbus wanted to gain confidence in its correctness.

In other words, the static analysis works on C programs because there are more extant (and mission-critical) C programs than Haskell ones, and the authors of the static analysis software wanted their tool to be as useful as possible, so they chose to analyze C.


If I had to guess, it's because C lets them model their software closely to how the hardware is designed.


There's not much, if any, real-time software written in Haskell, on account of the runtime not being amenable to real-time constraints. And anyway, I suspect it's an industry where "let's rewrite it from scratch" is not something you hear very often.


"on account of the runtime not being amenable to real-time constraints"

What are you basing that on?

A stateless, side-effect-free language would be significantly more amenable to real-time constraints because you can guarantee run times for your functions.


Yes, you could, but chances are that the provable upper bounds on memory usage or execution time are orders of magnitude above what you think it should take. Anything that produces a new value where you cannot prove that another value becomes unreachable (in which case the compiler could translate it to a destructive update) could trigger a garbage collection that might write half a GB of memory and take 0.1 seconds (those numbers may be realistic, but if they are, it's pure luck).


Sure, a garbage collector would mess you up, but garbage collection isn't an intrinsic property of stateless languages.

EDIT: Seems I'm wrong http://www.haskell.org/haskellwiki/GHC/Memory_Management


It is not intrinsic, but hard to avoid. Alternatives include:

- just allocate, never collect (not infeasible with 64-bit memory spaces, if you have lots of swap and can reboot fairly frequently, but bad for cache locality)

- garbage collect at provably idle times. Question is: when are those?

- concurrent garbage collection, with a proof that it can keep up with allocations

Finally, you could try and design a language where one can (often) prove that bar can be mutated in place in expressions such as

    bar = foo(bar,baz)
(That's possible if you can prove there's only one reference to bar at the time of the call)

(Rust's memory model may help here)

I am not aware of any claims that it is possible to write meaningful systems based on this model that do not have to allocate new objects regularly. The problem is that, to guarantee the 'one reference' property, you have to make fresh copies of objects all the time, and that defeats the reason you wanted the 'one reference' rule in the first place.
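A minimal C sketch of the idea (the function and the exclusive-ownership convention here are hypothetical, not from any real system): if the caller guarantees `bar` holds the only live reference to its buffer, the callee can update it destructively instead of allocating a fresh result. Rust's borrow checker can prove this property at compile time; in C it is only a caller-side promise.

```c
#include <stddef.h>

/* Hypothetical example: the caller promises that `bar` is the only
 * reference to its buffer, so foo() reuses that storage destructively
 * rather than allocating a new result object. */
static int *foo(int *bar, const int *baz, size_t n) {
    for (size_t i = 0; i < n; i++)
        bar[i] += baz[i];   /* destructive in-place update, no allocation */
    return bar;             /* same storage, as in `bar = foo(bar, baz)` */
}
```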


Thank you for the explanation. That's a lot to think about.


Um, no you can't. Garbage collection and laziness both completely destroy the ability to guarantee runtimes for functions.


He isn't because he often raises issues which aren't "on the agenda". So he'll make statements for instance about returning to the gold standard, or getting rid of all overseas military bases. These are not only radical suggestions, but they're also not things being discussed in the "national debate". So he comes off as not only radical, but out of touch with what people find important.

There is generally a feeling that Congress should be addressing issues that the people are concerned about. How do you determine what concerns the people? You look through the insane prism of the media.

PS: Full disclosure, I supported Ron Paul in both his runs for president.


Maybe this is a dumb question, but why not just use Google translate? (or some equivalent)

If you didn't know any English and you were to auto-translate your question, I think the vast majority of the time you'd get the answer you're looking for. You don't even need the result to be very grammatically correct or to sound natural. Most of the time you just need to get the gist of it and grab a code snippet.

It'd be helpful if there were a translation scheme targeted at the programming field so keywords were translated appropriately (so, like, whatever the Portuguese equivalent of 'namespace' is would translate accurately to 'namespace', etc.). I feel like creating a map of terms would be relatively easy.
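A minimal sketch of that term map (the Portuguese entries below are illustrative guesses, not a real glossary): a small lookup table that pins programming keywords to a canonical English form before handing the rest of the sentence to a general-purpose translator.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical glossary entry: source-language term -> canonical English. */
struct term { const char *src; const char *en; };

static const struct term glossary[] = {
    { "espaco de nomes", "namespace" },  /* illustrative, not verified */
    { "ponteiro",        "pointer"   },
    { "matriz",          "array"     },
};

/* Return the pinned English keyword, or NULL to fall back to the
 * generic translator for ordinary prose. */
static const char *translate_term(const char *src) {
    for (size_t i = 0; i < sizeof glossary / sizeof *glossary; i++)
        if (strcmp(glossary[i].src, src) == 0)
            return glossary[i].en;
    return NULL;
}
```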

In the grand scheme of things I really, really hope knowledge is consolidated in English. It's the lingua franca of the world, and it's also one of the most expressive languages. I spent some time working in Japan, and they had so much knowledge squirreled away from the world because they were effectively too lazy to learn English. (They're large enough, and advanced enough, that they can afford to maintain trade journals and online technical communities in Japanese.)


We'd love to be able to take all the info we have, and make it available in every language. But here's why it's harder than it looks:

`This is probably a dumb question , but why Google Did not translate simply use ? (Or equivalent ) You do not know English , and if you did, I will automatically convert to your question . I think most of the time , and want to get the answer you are looking for. It grammatically correct is very , or not require a result , you also , to sound natural . Most of the time , you need to and get the gist of it , to get the code snippet you . (Such as , what so that you will translate well to etc. ' space ' Portuguese equivalent of ' space ') because it was translated properly , keyword if there is a conversion method that targets the programming field I think that can help . I feel like the creation of the map of the term would be relatively easy to me. The graduate scheme of things , I hope knowledge is connected in English really . It is a common language in the world , and it also , it 's one of the most expressive language . I spend some time working in Japan , and the knowledge of many , they were crowded world away from the B / C was trouble they will learn English very effectively . ( They are large enough , they're advanced enough that you can afford to maintain the online technical community and trade press in Japan )`

I translated your text above to Japanese and back. (To be fair, that's two "lossy" transfers, not one, but it's pretty clear that you couldn't possibly have a two-way discussion that way.)


I want people to give you careful, detailed replies in their native language, with a Google-translated copy of their answer too.


"100 failures for every 1 success"

There aren't a lot of examples of big tech companies dumping money into R&D - with no end goal in mind - and then cashing out big time. Even companies that have gotten useful discoveries from hiring smart people to sit around and invent (Bell Labs, Xerox PARC, Microsoft Research, etc.) don't ever seem to end up raking in the big bucks.

The only thing that comes to mind is MS and their Kinect.

Unfortunately the ROI is pretty bad when you just throw money at a problem and hope it gives you results. It's cheaper to just buy up startups with interesting ideas.


All of the startups you're talking about are standing on the shoulders of much longer-term R&D that was funded by corporate labs or government/academic research.

When was the last time a small startup produced a huge breakthrough in physics or manufacturing that did not build off of research funded by the public or big consortiums, corporate labs, or universities?

I don't care if Bell Labs didn't cash out, just like I don't care whether NASA, DARPA, or Sandia Labs makes a profit. Quantum theorists need work too, and Y Combinator isn't going to fund them.


"I don't care if Bell Labs didn't cash out"

Yeah, well, it's nice that you don't care - but Apple's shareholders do. The point is that this kind of R&D doesn't pay off. Companies have tried it in the past and it didn't work.

If you want R&D then I completely sympathize, but you'll have to get your government to fund it - don't expect Apple to do it as a charitable donation to society.

PS: Examples like Leap Motion come to mind.


That's an unsupportable blanket claim. Such research approaches have paid off in the past. Take IBM:

Invented: DES, hard disks, DRAM, RISC, the relational database, laser eye surgery, barcodes, the PC.

Apple shareholders apparently care, because the company is currently being valued as if there are no more disruptive breakthroughs that will produce significant growth in its bottom line.

Also, to say Bell Labs didn't cash out is to be charitable. Ma Bell dominated for decades before the government broke them up. Did they fail because of a failure to cash out on inventions, or because Bell Labs was split off from the parent company that was funding them?

Also, if you suggest the government fund it, then maybe the government should tax Apple's cash - I wonder how their shareholders would like that?


Exactly right. There's a big difference between "go think up a way to figure out your position on the globe to within 10 cm" and "here's a pile of cash and some beanbags; think up clever stuff and we'll commercialize anything that looks interesting."

"The only thing that comes to mind is MS and their Kinect."

Um, didn't PrimeSense develop the imaging sensor used in the Kinect? That seems more like the Apple model of "let's figure out a cool use for this interesting device".


Actually, it was "go think up a way to deliver an ICBM between the US and Russia with meter accuracy" and "here's several hundred billion dollars to figure it out and build it."

The GPS system would never have been built by any startup, period. It took decades to build up the space-based infrastructure and engineering know-how to make it, with the backing of a very deep-pocketed government that was looking for results, not profits.

We need diversity in research approaches. No one is saying that profit-focused R&D is wrong, just that monoculture is. If you've got $100 billion in the bank, do you need to focus all of it on short-term projects, or can you try several strategies?


In the election for my local representative there are only two realistic contenders - both from the two main parties. No one else has the resources to send out all the junk mail and recruit all the college kids to harass people.

And the funny thing is - just like probably most congresspeople - our representative isn't really all that bad. She's just not really good. She's a nice old lady who's a vanilla career Democrat. The chances she'll be voted out are effectively zero.

So I just don't vote for people anymore (I leave it blank). However, I still go to the polls for the ballot initiatives.


I never knew this was a problem in EE, but I know for a fact it's a huge issue in CS.

They don't want to teach any "tools" because CS professors consider teaching people how to be good developers beneath them. They honest-to-god think CS is an actual hard science on par with math and physics. You can sum up their attitude with: "It's not an associate's degree, goddamnit!" (even though it should be)

Another source of reluctance is that by the time we graduated it would be somewhat outdated. I remember we learned Subversion and Apache Ant. Everyone now uses git, and I'm not even sure anyone still uses Ant.

95% of students used std::cout for debugging and 95% of students never set up an IDE.

If I had spent a couple of weekends properly learning to use Visual Studio, all of my programming assignments for those four long years would have been 10x easier.


CS prof. here.

We don't want to teach "tools" (e.g. IDEs) because they are always changing. There is no point in knowing how to set up an IDE when the IDE will have a different GUI before the student graduates. We teach "unix tools" a bit more because they haven't changed much over the last few decades.

My goal is to teach "concepts" (which is hard) and use tools as examples of those concepts. Dealing with the specifics of a particular IDE or tool is pointless. We are trying to give students general skills that will be useful for their whole lives, not skills that the industry needs this year.

That said, I do my best to teach debugging (mainly using gdb and valgrind). The real issue with debugging is that it relies on a lot of experience, which students do not have, by definition.
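As a concrete illustration (my example, not from the comment above): the canonical bug that valgrind's memcheck catches is an off-by-one heap write. Allocating `strlen(s)` bytes and then strcpy'ing into the buffer writes the terminating NUL one byte past the end, and memcheck reports an "Invalid write of size 1" with a stack trace back to the strcpy. The corrected version is shown; the `+ 1` is the fix.

```c
#include <stdlib.h>
#include <string.h>

/* Copy a string into freshly allocated storage. Dropping the `+ 1`
 * below reproduces the classic off-by-one heap overflow that
 * `valgrind --tool=memcheck` flags at the strcpy call. */
char *copy_string(const char *s) {
    char *p = malloc(strlen(s) + 1);  /* + 1 for the NUL terminator */
    if (p != NULL)
        strcpy(p, s);
    return p;                         /* caller must free() */
}
```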


I learned absolutely nothing from having to use horrid unix tools like gdb in college. The focus on unix tools, in my opinion, seriously degrades the ability of students to learn and unfortunately cements very very bad usability paradigms in their minds.

I learned a ton by debugging things on my Mac using Turbo Pascal, Think C, and even, when absolutely necessary, MacsBug. Those are all obsolete today, but it was well worth learning the skills regardless of the tool.

That said, we were also taught very little about debugging in class. I learned it all on my own (with help from friends). Most of my fellow graduates seriously didn't know how to set a breakpoint and step through code at the end of their college days.


I'm so glad you replied!

I've heard versions of what you've said hundreds of times and honestly, you academics are missing the point!

Yes, the particular tool you learn will become obsolete, but debugging in Eclipse, gdb, Visual Studio, etc. is all basically the same. The knowledge you get is transferable, just as students have a head start if you teach them C++ and they end up in a Java shop.

Teach them tools that are going to be obsolete! Stop worrying about that.

You have no idea how much time your students are wasting hitting their heads against walls that shouldn't exist, trying to debug their crappy code instead of actually working on understanding the underlying concepts. I'd say it's a 9-to-1 ratio of brain-numbing debugging to actual concept implementation.

I really strongly encourage you to set up your students with IDEs. Have a few seminars on how to set up a Visual Studio environment, and how to debug a large codebase. It abstracts a lot of the fluff that stands between them and the underlying concepts. The command line and its tools are great in the right context, but you're making the barrier to entry a lot larger than it needs to be. You can transition to command line tools in more advanced courses, but don't hamper their learning from the get-go.


> I really strongly encourage you to set up your students with IDEs. Have a few seminars on how to set up a Visual Studio environment, and how to debug a large codebase.

You can find lots of information about that using a search engine of your choice. Any student should be able to use Google, so there's no reason to waste teaching resources for that purpose.


That is a bit of a weird thing to say. Lots and lots of things can be found using a search engine, but that doesn't mean they shouldn't be taught.


For really big programs you can't use valgrind; it causes a significant performance hit, and the system often starts to choke under the memory load. GDB also really alters the timing of things, so it's bad for timing/multithreading bugs.

Mostly what helps is working through the logs and traces, following along in the code as you pass through them. Use a needle and a steady hand ;-)

I guess that's what they should practice in school as part of learning the trade. On the other hand, it's a trade-off: school needs to teach the general principles of programming, and with regard to debugging it is hard to distill universally applicable principles; it all depends on what you are doing.

Here is my take:

- unit testing is cool; it allows you to catch problems in isolation, so it would be useful to teach it as a discipline

- for most GUI programs and application logic the debugger is really helpful

- traces and log analysis as mentioned earlier as the last resort
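A tiny sketch of the unit-testing bullet above (`clamp()` is a made-up example function, not from this thread): exercise one small function in isolation against known cases with plain `assert()`.

```c
#include <assert.h>

/* Constrain v to the inclusive range [lo, hi]. */
static int clamp(int v, int lo, int hi) {
    return v < lo ? lo : v > hi ? hi : v;
}

/* A unit test isolates this one function from the rest of the program. */
static void test_clamp(void) {
    assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
    assert(clamp(-3, 0, 10) == 0);   /* below range: clamped to lo */
    assert(clamp(42, 0, 10) == 10);  /* above range: clamped to hi */
}
```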

They should also teach how to formulate the problem, analyze the problem's source, and weigh ways of fixing a defect. Here you have to understand the trade-offs: when to choose a local fix, when a local fix is no longer appropriate, and how to quickly test a fix so that it does not break the system in any way.

If I remember correctly, they do teach some of that in software engineering courses.


Valgrind is fine for teaching students. I would probably use something faster and more sophisticated (and probably more expensive) in a professional setting but that doesn't mean there's nothing to learn from using it.

Adding logging also changes timing. Those aren't just NOPs you're throwing in. Also, changing timing isn't always a bad thing. Sometimes you can get a bug from a race condition to happen more often with the debugger.


Valgrind can't be fast: it has to track and color each memory location, and it has to check each pointer reference. Its memory overhead for larger footprints can also be considerable.

Once upon a time it was really slow, but then they added just in time compilation.

Logging/tracing can be selectively enabled.


The tools for debugging have changed, yes. But the high-level process is the same whether it is fixing a NAND gate, figuring out the cause of a segfault, or finding the location of a leak in a water pipe.

There is a problem. Many of the interns and recent CS grads I have worked with or hired have struggled with debugging. On a day to day basis, debugging is one of the most important skills. I would argue it is also an important skill in research settings, not just industry.

I don't know the answer, but maybe somehow integrating debugging into every CS class over the four years would help. Because you are right, it also requires a lot of practice/experience to get good at it so it is unlikely that just adding a class would solve the problem.

That is what the Purdue psychology program (I have a minor) does in its undergraduate program. The first 2-4 weeks of every class are spent on research methods, often specific to the particular topic of the class. After taking the 4-5 classes to complete the minor, how to reduce bias in surveys, usability tests, etc. was beaten into my brain.


Thanks for the comment. I agree that tools go obsolete quickly. But it occurs to me that debugging is both a set of tools and a knack.

I'm curious what you think about how to teach the knack.


How to teach the knack:

Write lots of code that uses the most sketchy language features available. The chance is high that soon you'll get code that doesn't work. Now go through each line (source code or assembly) using a debugger of your choice and get it to work again.

Do this until there is nothing that will scare you anymore.


Also, do lots of maintenance programming. Finding bugs in code someone else wrote is generally much more difficult (and in my opinion valuable in learning sense), because you aren't familiar with it.


We aren't asking for vocational training, but for teachers who teach how to think, how to design, how to run experiments, and so on. So many schools (even the 'top' ones) turn out people who cannot do this, and I find it appalling. These things are foundational; they are not vocational. Unfortunately that does in fact mean having to learn some tools that will become obsolete. So what?

I did EE labs, and while resistors are the same, none of the digital components really are. I learned how to design NPN junctions, which are now largely replaced with CMOS, but so what? It was the experience that mattered. You can't hand-wave that away ('you' here is generic, not you, mandor), teach the math of junction biases, and expect that anyone will be able to do anything with it once they graduate. But so many CS programs try to do exactly that. I walked out of undergrad knowing how to do recurrence relations, prove the complexity of graph algorithms, and so on, but with almost no clue about how to actually design an algorithm, how to structure and design software, and so on. It took grad school, and some good teaching professors, to change that.

I've rewritten this three times or so - it is hard to address this without sounding like I'm attacking you, which I am not. But I am truly dismayed by the skill sets of people graduating from CS programs. I'm not asking for Java vocational training, but for a recognition that 99% of graduates will be asked to function as "engineers", not "scientists". The people who come out and function well seem to do so despite the schooling, not because of it.

Incidentally, the best EE teacher - the best teacher of any kind - I ever had worked in industry for many years, decided he wanted to teach, and went into teaching. His classes were practical and pragmatic. Oh, you were failing if you didn't master the math and theory behind the material, but he taught you how to design, how to think, how to manipulate all of this book-learned stuff to make real things that worked. We had to cost out our projects, write design reports, and so on. Extremely hard courses, but absolutely fantastic, because of, not despite, the focus on what might be called 'pointless' things (who cares what the cost of a transistor was in 1986, after all?!)

Unfortunately, you cannot gain experience, or develop the process, without using tools. That needs to be part of the education. Just look at the post by the poor person you are responding to. Think of how much better his entire education would have been if, in some freshman-level lab, he had been taught some of these basics. I shudder to think how much he probably spent on that education versus what he got.


>They don't want to teach any "tools" b/c CS professors find teaching people how to be good developers as beneath them. They honest to god think CS is an actually hard science on par with math and physics.

Despite what you (and a lot of developers I've talked to) seem to think, CS is a hard science. However, CS and software development are not the same thing. Most CS programs are taught by professors who have research degrees in computer science, not by people who have ever been (or probably ever claimed to be) professional software developers, so there is a fundamental disconnect between what you seem to think you should be learning and what is actually being taught.

The professional software developer is a fairly new occupation in the grand scheme of things. The people who are most qualified to teach it are people who have actually worked as developers in the industry, but that is difficult because 1) most universities are reluctant to hire (and students can be reluctant to take classes from) adjunct/non-PhD faculty, and 2) it is hard for schools to offer enough to lure many of them away from industry anyway.


"CS is a hard science"

Hahaha. As someone who did a degree in physics, computer "science" is a joke when it comes to math. They took one of the easiest fields of mathematics and rolled it into a half-assed degree. Sure, you learn some logic and some graph theory (yeah, let's try to avoid numbers as much as possible...) but it's not at a very high level. I seriously doubt CS professors are making significant contributions to mathematics.

"there is a fundamental disconnect between what you seem to think you should be learning and what is actually being taught"

Yeah. That was the point of what I wrote. There is also a disconnect between what academia wants to teach and the tools that students need to actually learn the concepts.

It's like teaching a literature class without actually teaching people how to write. Code is our primary tool for expressing algorithms and concepts. If you hamper people's ability to write code, you are shooting yourself in the foot.


Why do you say CS takes one of the easiest fields in mathematics? People have been studying CS since long before we had computers, and there are still famous open problems. If the math is so easy, perhaps you'd like to give a proof of P=NP (or P!=NP) and claim your million dollars? Do you really think CS is all discrete math and graph theory? I don't go around claiming physics is easy because Newtonian mechanics is simple. I'd also argue that math only gets interesting once you take the numbers away. If all you're doing is number crunching you should probably just write some code instead. Computers are good at that sort of thing.

In my experience the real disconnect with CS degrees is expectations about what is and what should be taught. A lot of students want a vocational program. Universities end up teaching a hodgepodge of topics such as how to write code in a particular language, program design/software engineering, algorithms and complexity, discipline specific specialties (e.g. AI, OSes), computability, etc.

When I was in university I was taught how to debug my code with ddd and gdb. Was this a waste of my time because I no longer use them? Of course not. Everything I learned with them carries over to the debuggers I use everyday. My degree also covered several languages that I've never used outside of classes. The skills I learned learning them help me any time I want to pick up a new language and start coding.

Code is not a primary tool for expressing algorithms and concepts. Code is how I tell the computer what to do. If code was so good for communicating concepts and algorithms to other people I wouldn't need to write comments. I also haven't seen any algorithm books that include real code. The closest you'll get there is MIX. Good luck programming a real application in that.


Why the bitterness against CS?

As jarvic says,

> However, CS and software development are not the same thing.

A lot of schools offer a separate degree in Software engineering as opposed to CS (or at least separate tracks). So the distinction is pretty clear. Where I studied, there are separate degrees for CS and IT. You seem to be fudging the two. Do you expect physics majors to be mechanical engineers automatically?

CS degrees teach you algorithms, not how to write code in Java. Doesn't matter what IDE/language you're using, bubble sort is going to be slower than quicksort for large lists.
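For what it's worth, that claim is easy to demonstrate in a few lines. A quick sketch in Python (textbook implementations, not tuned for production):

```python
import random
import time

def bubble_sort(xs):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quicksort(xs):
    """O(n log n) on average: partition around a pivot, recurse."""
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[len(xs) // 2]
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))

data = [random.randint(0, 10**6) for _ in range(3000)]

t0 = time.perf_counter()
b = bubble_sort(data)
t_bubble = time.perf_counter() - t0

t0 = time.perf_counter()
q = quicksort(data)
t_quick = time.perf_counter() - t0

print(f"bubble: {t_bubble:.3f}s, quicksort: {t_quick:.3f}s")
```

On a few thousand elements the quadratic sort already loses by orders of magnitude, regardless of which language or IDE you write it in.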

Yes, unfortunately most people who do CS end up becoming IT drones. That's the nature of the industry, not the CS program's fault. As a matter of interest, what do most physics majors become? I'm sure there aren't enough professorships available.

Or in the words of E. W. Dijkstra: “Computer science is no more about computers than astronomy is about telescopes.”


Sorry if I came off as bitter. I'm a couple of years out of college and, honestly, I felt that the CS department was pretty self-entitled and misguided.

"CS degrees teach you algorithms"

Yeah, it does. And it doesn't feel like that's really enough. By the end, you don't get the feeling that you've only learned a small fraction of what's out there, and I guess it felt like there wasn't all that much science to it.

I went to a school with a decent department (I think top 20 in the US) but I never felt like they were actually doing any real, interesting, groundbreaking research.

In contrast, the Physics department is doing all sorts of stuff. Quantum computing, CERN related stuff, building new astronomical sensors, string theory stuff etc. etc. They've got their own problems, but at least it's something you can point to.

Physics majors are sorta screwed. Physics is sorta the opposite of CS. When you're done you feel like you've only learned a tiny bit and that there is so much more to understand. If you're not trying to go to grad school, the department doesn't really care about you. You end up having no marketable skills and your selling point is "I-went-through-an-insanely-difficult-degree-and-I'm-good-at-problem-solving". Most people can sorta code in Python, and sorta do some circuit stuff. It's pretty bad.

People end up teaching, or working in unrelated jobs. Or going to grad school in related disciplines. One of my buddies ended up picking grapes with illegal immigrants.


Disclaimer: I'm a CS PhD student that did pure math in undergrad, so I probably have a different perspective on these things than someone who did CS only.

CS, at a graduate level, is a much more varied field than most people, even CS undergrads, realize. In fact, a lot of the main research groups in my department (graphics, vision, image analysis, robotics, VR, visualization) are things most CS undergrads don't even get exposed to.

In math or physics, a lot of the undergrad courses are kind of basic, entry-level things that give you a feel for some of the different fields of research without going too far in depth. This isn't really the case with CS, because CS departments have to cater to several different audiences who tend to fall into one degree. They have to come up with a curriculum that balances programming/development concepts for people who want to be developers, networking/administration stuff for people who want to be IT (although this is more and more being moved into a separate program), and theory for people who actually want to be computer scientists. Even then, this theory is mostly relegated to computation theory, which is only a small part of what research computer scientists do.

Personally, I'm in medical image analysis, a field which is a collision between a ton of different areas, including physics, statistics, operations research[1], numerical analysis, image processing, differential geometry, and medicine. I don't know that I would consider any of those fields to be the "easiest branch of mathematics" (what does that even mean, anyway?)

This post has kind of gotten off the rails a little bit, so I'll end with this: I think, if you just look at a CS undergrad program and try to project that to what actual CS research is like, you end up getting a much narrower and incomplete picture than if you do the same thing to a math or physics department, partly because CS research tends to be much more interdisciplinary than others (I have many more collaborators in many more areas than some friends of mine in math/physics/biology do).

--

[1] http://en.wikipedia.org/wiki/Operations_research


The purpose of a CS course is not necessarily to teach people to become developers. Indeed it would be interesting to know what % of CS grads even work as programmers post graduation, probably less than HN would imply. Based on the people I keep in touch with from my graduation class, I would put it somewhere near 30%.


That's a common thing I didn't understand about my fellow CS majors. Many of them did not want to become developers and even took pride in the fact that they made it to the final exams without ever handing in a single programming assignment they did themselves.

Yes, these people have amazing careers in management and consulting ahead. Is this good for the industry? I doubt it.

For me, the purpose of a CS course is to make people reasonable at programming with an advanced knowledge about the remainder of the field. It's easier to teach a programmer formal CS methods than to teach a formal methods guy to program.


Funny, I always thought it the other way around. I struggle whenever I try to teach myself any formal/theoretical stuff because without the aid of a teacher checking my work I'm never sure if I really understand or not.

I never had a problem learning programming languages on my own by following examples though.


I think you'll find that in CS, there's often if not always a way to implement or approximate the theory in code. To me, that's the more interesting part, but it requires self-learning because there are very few teachers who can blend theory into practical terms. Like the original EE intuition examples, perhaps it's something you appreciate after 10 years in the field.


Ouch. Does that imply that finishing in the top 80% of your class puts you somewhere near the bottom 33% of developers?


You are assuming that the best CS majors must obviously want to be commercial developers.

People go to grad school in other disciplines (sciences, stats, etc.) which can take advantage of computation, for one thing.


Wouldn't that depend on the correlation between class grades and developer skill? I have no idea about that.


I'm curious - what do the remaining 70% do?


Various things. A common aspiration was to try and fast-track to a management position at a consultancy or blue chip, where things are more about spreadsheets than code. Others do technical work, but not strictly development, like Q&A or technical support.

Some went into academia and some did other things altogether, I know at least one person who became a tree surgeon and another who is a session musician.


Tech support and a CS degree?!? It might just be me, but I can't really see a person with a bachelor's or master's in CS working in tech support.


There are more jobs available, especially in places without software companies. Or people take those jobs temporarily while they look for other jobs, get promoted and end up following a different career track.


If Q&A is QA then it should be considered development, in a specialized domain.


except you just click things on a GUI there.


I'm fond of developer-QA folks that automate most of their pointing and clicking. There is, absolutely, a skill in finding bugs by exploratory testing, but someone who can combine that with automating previous tests (thus building up a regression suite) in a way that's maintainable is a real keeper.
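To make that concrete, here's a minimal sketch of what "automating previous tests" can look like with Python's unittest. The function under test and the bugs it encodes are hypothetical, just stand-ins for whatever an exploratory session uncovered:

```python
import unittest

# Hypothetical function under test -- a stand-in for whatever
# behaviour the exploratory tester was poking at by hand.
def normalize_username(name):
    return name.strip().lower()

class RegressionSuite(unittest.TestCase):
    # Each defect found by hand becomes a permanent automated check,
    # so the regression suite grows with every exploratory session.
    def test_trailing_whitespace_is_stripped(self):
        self.assertEqual(normalize_username("Alice  "), "alice")

    def test_mixed_case_is_lowered(self):
        self.assertEqual(normalize_username("BoB"), "bob")
```

Run it with `python -m unittest` and every old point-and-click check replays in milliseconds.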


I think you mean "except when..." As others said, QA automation is a serious endeavor, and an important one.


Unless you're writing test automation.


The debugger should be a concept to be taught, just like the compiler, linker, interpreter, assembler and so on. There should be at least a few lab hours where students are given existing programs and need to go for a bug hunt. This should include strace and valgrind, two essential tools that will probably not be outdated for quite a while.
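As a tiny illustration of that kind of lab exercise (in Python rather than C, so no valgrind involved, but the "here's a program, find the defect" idea is the same):

```python
# A planted bug for a "find the defect" exercise: the default list is
# created once at function definition time and shared across all calls.
def collect(item, bucket=[]):          # bug: mutable default argument
    bucket.append(item)
    return bucket

first = collect("a")
second = collect("b")                  # expected ["b"], actually ["a", "b"]

# The fix a student should arrive at after stepping through in a debugger:
def collect_fixed(item, bucket=None):
    if bucket is None:
        bucket = []                    # fresh list on every call
    bucket.append(item)
    return bucket
```

Stepping through the buggy version in a debugger and watching `bucket` keep its contents between calls is exactly the kind of hands-on hunt that makes the concept stick.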

Now, where I disagree is the IDE part. I have nothing against IDEs per se, and I think everyone should be able to choose the tools they work with best once they start a professional career (hence why I like the concept behind cmake, btw.). But I think it's a bad idea to start off with this as a student. An IDE comes along as an inseparable entity, and it's hard to distinguish the essential components behind it if that's the only thing you practice developing with. Students should learn what those components do from scratch, and that's easier when you can see them as Lego bricks in front of you. IMO it's also a more satisfying learning experience than having MS, the Eclipse Foundation or anyone else hold your hand while you take your baby steps.


I see this as completely backwards. Tools like cmake are often worse for beginners because they have no idea how to understand them, and it involves learning another new language in addition to whatever programming language they're learning. Looking at something like Xcode or Visual Studio or Eclipse, where they can see "these are the source files that comprise my project," is much easier.

I do agree that learning the difference between the editor, compiler, the linker, and various other pieces is important, but I think there are better ways to do it than trying to understand the unix build tools right off the bat.

FWIW, my first exposure to all of this was in high school where we were taught Turbo Pascal on DOS. It was essentially a text-based IDE, but the teacher still taught us about the editor, the compiler, the linker, etc. and it made perfect sense. When I went to use the unix tools in college, it was quite confusing.


I think it's important to hit the ground running using the best tools available.

"Students should learn what those components do from scratch"

Yeah, they will eventually. When they take a compilers class, they'll understand how the compiler works. It's okay to have that be "magic" in the mean time. They should be focusing on learning about OO principles, and data structures, and algorithms and not futzing with command line tools.


> They honest to god think CS is an actually hard science on par with math and physics.

Theoretical computer science is a branch of mathematics. Don't dismiss the whole field just because your university chooses to underrepresent the theoretical side.


Plenty of people still use ant. It's the path of least resistance where I work since we have a lot of tooling and preexisting code that use it.


On that note plenty of people still use svn for the same reasons.


As an update: Gradle, which is easier to use than Maven, is gradually replacing Ant and has its sights on Make next. See Android, for example.


It also gives a huge advantage to students who've been coding for a few years, especially if they've been involved in open source projects. I used source control tools for my programming classes at uni (not a CS major) and it made me 50% more productive, easily.


We didn't even learn what source control is, let alone a tool like Subversion. We used it because some of us were familiar with it, but tools in general weren't taught.


I was taught version control. In fact, university made very sure to emphasise proper source control in our second year group project, even to the extent of making one person in each group the "code librarian", responsible for making sure the code of the various members fitted together, was properly controlled, and safely backed up (which was me).

And of course the nightmare situation occurred. The project manager (read: no !*&^£ clue) "accidentally" deleted the entire source control repository. Reconstructed from backups from three hours previously, plus changes from peoples' working copies within half an hour.


Print statements for debugging are not necessarily a bad thing: http://www.scott-a-s.com/traces-vs-snapshots/
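The linked post's point, that a running trace often tells you more than a single breakpoint snapshot, can be had cheaply with a decorator. A minimal Python sketch:

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")

def trace(fn):
    """Log every call and return value: a running history of execution,
    rather than the point-in-time snapshot a breakpoint gives you."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        logging.debug("-> %s%r", fn.__name__, args)
        result = fn(*args, **kwargs)
        logging.debug("<- %s returned %r", fn.__name__, result)
        return result
    return wrapper

@trace
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Calling `fib(5)` then logs the whole call tree, which is exactly the "trace" side of the trade-off.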


Agreed! In the same vein, I recently wrote about both debugging vs logging [1] and about session-based logging [2].

[1] http://henrikwarne.com/2014/01/01/finding-bugs-debugger-vers...

[2] http://henrikwarne.com/2014/01/21/session-based-logging/


Neat, we hit on many of the same points.


Yep!

