Putting the science in computer science

Author: Felienne Hermans

Posted: 9 May 2015

DESCRIPTION

Programmers love science! At least, so they say. Because when it comes to the 'science' of developing code, the most-used tool is brutal debate. Vim versus Emacs, static versus dynamic typing, Java versus C#: these debates can go on for hours on end. In this session, software engineering professor Felienne Hermans will present the latest research in software engineering that tries to understand and explain which programming methods, languages and tools are best suited for different types of development.

TRANSCRIPT

1-2. Felienne, Delft University of Technology: Putting the science in computer science. This slide deck is about the science in computer science.

3. We all love science, right?
4. We love crazy science like this: http://the-toast.net/2014/02/06/linguist-explains-grammar-doge-wow/
5. And this: http://allthingslinguistic.com/post/56280475132/i-can-has-thesis-a-linguistic-analysis-of-lolspeak
6. What is science?
7. So, with science, we should be able to answer questions about the universe, like this one.
8. You'd expect computer scientists to debate this somewhat respectfully.
9. "Interpreted >> compiled!" "JavaScript 4 ever!" "Pure is the only true path." "C++ is for real coders." "PHP sucks." "Pascal is very elegant." Unfortunately, reality is more like this.
10. The state of the debate is not so scientific.
11. But some people are trying! In this slide deck I'll highlight some of the interesting results those people have found so far.
12. Researchers at Berkeley have conducted a very extensive survey on programming language factors: http://www.eecs.berkeley.edu/~lmeyerov/projects/socioplt/viz/index.html
13. They collected 13,000 (!) responses, and their entire dataset is explorable online.
14. They found loads of interesting facts; I really encourage you to have a look at their OOPSLA '13 paper (Empirical Analysis of Programming Language Adoption). My favorite is this graph: factors for choosing a particular language.
15. Notice that the first factor that actually has something to do with the language itself only appears in 6th place.
16. Two other language-related factors are in 8th and 12th place.
17. What correlates most with enjoyment? Meyerovich and Rabkin also looked into what makes programmers happy. Want to guess?
18. It's expressiveness. That is what correlates most with enjoyment.
19. Could we measure it? So which language is the most expressive? Is there a way to measure this?
20. Donnie Berkholz, researcher at RedMonk, came up with a way to do this. He compared commit sizes across different projects (from Ohloh, covering 7.5 million project-months). His assumption is that a commit represents more or less the same amount of functionality, regardless of language. http://redmonk.com/dberkholz/2013/03/25/programming-languages-ranked-by-expressiveness/
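To make the idea behind this metric concrete, here is a minimal sketch of my own (not Berkholz's actual Ohloh pipeline): it runs git on one repository and reports the median number of changed lines per commit. Under his assumption, a lower median hints at a more expressive language, because the same amount of functionality fits in fewer lines.

```java
// Illustrative only: median changed lines per commit for the current git repository.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CommitSizeSketch {
    public static void main(String[] args) throws Exception {
        // Each commit is printed as a literal "COMMIT" line followed by its numstat lines.
        Process git = new ProcessBuilder("git", "log", "--numstat", "--format=COMMIT")
                .redirectErrorStream(true)
                .start();

        List<Long> linesPerCommit = new ArrayList<>();
        long current = 0;
        boolean inCommit = false;

        try (BufferedReader out = new BufferedReader(new InputStreamReader(git.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                if (line.equals("COMMIT")) {            // a new commit starts
                    if (inCommit) linesPerCommit.add(current);
                    current = 0;
                    inCommit = true;
                } else if (!line.isBlank()) {           // numstat line: "added<TAB>deleted<TAB>path"
                    String[] parts = line.split("\t");
                    if (parts.length == 3 && !parts[0].equals("-")) {  // "-" marks binary files
                        current += Long.parseLong(parts[0]) + Long.parseLong(parts[1]);
                    }
                }
            }
        }
        if (inCommit) linesPerCommit.add(current);

        if (linesPerCommit.isEmpty()) {
            System.out.println("No commits found - run this inside a git repository.");
            return;
        }
        Collections.sort(linesPerCommit);
        System.out.println("Commits analysed: " + linesPerCommit.size());
        System.out.println("Median changed lines per commit: "
                + linesPerCommit.get(linesPerCommit.size() / 2));
    }
}
```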
21. In the graph, the thick line indicates the median, the box spans the 25th to 75th percentiles, and the whiskers the 10th and 90th. http://redmonk.com/dberkholz/2013/03/25/programming-languages-ranked-by-expressiveness/
22. Let's have a look at which language goes where!
23. Loads of interesting things to see here. For instance, all the popular languages (in red) are at the low end of expressiveness. This corroborates Meyerovich's findings: while programmers enjoy expressiveness, it does not seem to be a factor in picking a language.
24. Also interesting is the huge difference between CoffeeScript and JavaScript, although this might be because CoffeeScript is young and its commits are therefore still quite clean.
25. Finally, and unsurprisingly: functional languages (Haskell, F#, the Lisps) score high on expressiveness.
26. Another interesting fact came out of Meyerovich and Rabkin's study.
27. Could we measure it?
28. Static versus dynamic: does it really matter? Let's experiment! Stefan Hanenberg tried to measure whether static typing has any benefits over dynamic typing.
29. Two groups. He divided a group of students into two groups, one working with a type system and one without, and had them perform small maintenance tasks. Let's summarize his results... Star Wars style!
30. On the left, we have the lover of dynamically typed languages. He has gone through some type casts in his life and he is sick of them! On the right is his statically typed opponent. Let's see how this turns out.
31. But what about type casting? One of the arguments of the dynamic-typing proponents is that type casting is annoying and time-consuming. Hanenberg's experiment showed:
32. It does not really matter. For programs over 10 LOC, you are not slower if you have to do type casting.
33. "I'm sure I can fix type errors just as quickly."
34. Not even close.
35. The differences are HUGE! Blue bar = Groovy, green bar = Java; the vertical axis is time.
36. In some cases, the run-time errors occurred at the very line where the compiler had found a type error.
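As a small, hypothetical illustration of what is being compared here (this is not code from Hanenberg's experiment material), the Java snippet below shows both sides: the "cast tax" that annoys the dynamic-typing camp, and the compiler catching a type error that a dynamic language such as Groovy would only report once the faulty line actually runs.

```java
// Hypothetical illustration of the two effects measured: cast overhead and compile-time errors.
import java.util.ArrayList;
import java.util.List;

public class TypingSketch {
    public static void main(String[] args) {
        // The "cast tax": a raw, untyped collection forces an explicit cast on every read.
        List rawNames = new ArrayList();
        rawNames.add("Ada");
        String first = (String) rawNames.get(0);

        // With a typed API there is no cast, and mistakes surface at compile time.
        List<String> names = new ArrayList<>();
        names.add("Ada");
        String second = names.get(0);
        // names.add(42);   // would not compile: incompatible types - caught before the
        //                  // program ever runs, which is where the Java group in the
        //                  // experiment gained time over the Groovy group.

        System.out.println(first + " / " + second);
    }
}
```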
37. "But my dynamically typed APIs, they must be quicker to use!"
38. No.
39. "I'll document the APIs!"
40. Won't help! It turns out that type names help more than documentation.
41. "I'll use a better IDE!"
42. Won't help!
43. It looks like (Java-like) static type systems really do help in development! While more research is needed, we might conclude this. (Stefan Hanenberg)
44. Do design patterns work? Let's tackle another one!
45. Walter Tichy wanted to know whether design patterns really help development. He started small, with a group of students, testing whether giving them information about the design patterns in a program helped its understandability.
46. Again two groups, but different. Students were again divided into two groups, but the setup was a bit different.
47. Two programs were used (PH and AOT), and some students got the version with pattern documentation first and the version without it second. On different programs, obviously; otherwise the students would know in the second test that the patterns were there. Prechelt et al., 2002. Two controlled experiments assessing the usefulness of design pattern documentation in program maintenance. TSE 28(6): 595-606.
48. The results clearly show that knowing a pattern is there helps when performing maintenance tasks. However, it was not entirely fair to measure time, since not all solutions were correct. If you look only at the best solutions, you see a clear difference in favor of the documented version. Tichy updated the study design in the next version, where only correct solutions were taken into account.
49. Again two groups, but again different.
50. In this next version, professionals were used instead of students. Furthermore, the setup was different: the same experiment was run twice, first without the participants knowing the patterns; then they took a course on design patterns, and after that they did the test again.
51. Again, the results showed that the version with patterns turned out to be easier to modify.
52. For some patterns, though (like Observer), the differences between pre-test and post-test were really big. For those patterns, the course made a big difference. In other words: patterns only help if you understand them.
53. They work!
54. It has to do with how the human brain works.
55. Long-term memory and short-term memory. Human memory works a bit like a computer: long-term memory can store things for a long time, but it is slow; short-term memory is quick, but can only retain about 7 items. Using patterns, you spend only one slot ("this is an Observer pattern") rather than several ("this class is notified when something happens in that other class"). "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information", George Miller, 1956.
56. They work! Experiments show it, and cognitive science helps us understand why.
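To make that one-chunk idea concrete, here is a minimal, invented Observer in Java (the class names are mine, purely for illustration): a reader who recognizes the pattern holds a single fact in short-term memory, "Auction notifies its observers of price changes", instead of tracing each notification call by hand.

```java
// Minimal, invented Observer example used to illustrate chunking.
import java.util.ArrayList;
import java.util.List;

interface PriceObserver {
    void priceChanged(int newPrice);
}

class Auction {
    private final List<PriceObserver> observers = new ArrayList<>();

    void addObserver(PriceObserver observer) {
        observers.add(observer);
    }

    void setPrice(int newPrice) {
        // The subject does not know who is listening; it just notifies everyone registered.
        for (PriceObserver observer : observers) {
            observer.priceChanged(newPrice);
        }
    }
}

public class ObserverSketch {
    public static void main(String[] args) {
        Auction auction = new Auction();
        auction.addObserver(price -> System.out.println("Bidder sees new price: " + price));
        auction.setPrice(100);
    }
}
```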
57. Does Linus's Law hold?
58. "Given enough eyeballs, all bugs are shallow."
59. Apparently, the law is not so universal; we know that code reviews are hard to do well for larger pieces of software.
60. Not impressed? Yeah, a tweet is not exactly science. Don't worry, I have some proof too.
61. Researchers at Microsoft Research published a study in which they connected the number of bugs in the release of Vista (gathered through bug reports) with organizational metrics, such as the number of people who worked on a particular binary. Nagappan et al., 2008. The Influence of Organizational Structure on Software Quality: An Empirical Case Study, ICSE 2008.
62. They found that the opposite of Linus's Law is true: the more people who work on a piece of code, the more error-prone it is! More touchers -> more bugs.
63. More tied to bugs than any code metric. These and other organizational metrics are more strongly tied to quality than any code metric!
64. This means that if you want to predict future defects, the best thing you can do is look at the organization! Might be just me, but I think that is surprising.
65. Putting the science in computer science. Felienne, Delft University of Technology. That's it! I hope you got a sense of the usefulness of software engineering research in practice. If you want to keep up, follow my blog, where I regularly write about the newest SE research.