I am not that person, but I can probably explain anyway.
The whole point of the web is to be a common runtime. You can use it from any browser and any device. You avoid the situation where you prefer browser X because it has some feature that is critical to you, but you have to use browser Y for some web sites. (So in the end, you'll use browser Y for all sites so that your user agent can actually be your agent by, e.g., remembering your history and passwords. Or worse, you will be forced to use multiple browsers due to multiple incompatible web sites.) You also avoid the situation where a new hardware platform or browser engine can never be released because there's no way to make it compatible with the current web.
In short, if a "web site" does not work in a strictly standards-compliant browser, then it's not part of the Web. It may be linked to from the web, and it may almost be part of the web, but it's really something else hanging off of it.
There are many such things. Flash apps are a common type. Heck, PDFs are too and until recently all videos were. There's nothing inherently wrong with them existing, as long as they aren't claiming to be part of the Web.
In this case, it may only be a QA shortcoming with respect to a 0.7% market share device. Inbox is not so benign, since the compatibility delay is a pretty strong indicator of corporate priority and policy.
And that's the danger, and the probable reason for the downvote. If you accept non-web things on the web, only thinking of the convenience and added functionality you gain (because your particular user agent is fine with it and makes it appear to be part of the web), then lock-in will gradually set in, standards will lose their meaning, browser makers will start having to implement each other's half-baked experimental features that don't interoperate with everything else, cats and dogs will start living together, and there will be no reason not to gain the extra 10% in performance and functionality you get from a native app because your web sites don't run universally anymore anyway. So developers get to start maintaining half a dozen separate code bases because we've pissed away our opportunity to make a single codebase work everywhere. The blasé acceptance of that possibility is what earns downvotes.
The movement in this direction is already well under way on the mobile web, so I'm not just dreaming up imaginary hypothetical scaremongering.
Fitting to a statistical model superficially makes sense. But I think the details kill it.
The outcome you are measuring is the change in a student's test score from before they had the teacher to after. VAM attempts to statistically estimate the teacher's contribution to that change.
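To make "statistically estimate" concrete, here is a minimal sketch of the general idea (a toy one-covariate model I made up for illustration, with made-up names, not the formula any actual VAM system uses): predict each student's post-test score from their pre-test score, then credit each teacher with the average amount by which their own students beat or miss that prediction.

    // Toy value-added sketch (illustration only, not a real VAM formula):
    // fit post-test ~ pre-test over all students, then a teacher's "value
    // added" is the mean residual of that teacher's students.
    interface Student { pre: number; post: number; teacher: string; }

    function linearFit(xs: number[], ys: number[]) {
      const n = xs.length;
      const mx = xs.reduce((a, b) => a + b, 0) / n;
      const my = ys.reduce((a, b) => a + b, 0) / n;
      let cov = 0, varX = 0;
      for (let i = 0; i < n; i++) {
        cov += (xs[i] - mx) * (ys[i] - my);
        varX += (xs[i] - mx) ** 2;
      }
      const slope = cov / varX;
      return { slope, intercept: my - slope * mx };
    }

    function valueAdded(students: Student[]): Map<string, number> {
      const { slope, intercept } = linearFit(students.map(s => s.pre),
                                             students.map(s => s.post));
      const byTeacher = new Map<string, number[]>();
      for (const s of students) {
        const residual = s.post - (intercept + slope * s.pre); // gain beyond prediction
        if (!byTeacher.has(s.teacher)) byTeacher.set(s.teacher, []);
        byTeacher.get(s.teacher)!.push(residual);
      }
      const result = new Map<string, number>();
      for (const [t, rs] of byTeacher) {
        result.set(t, rs.reduce((a, b) => a + b, 0) / rs.length);
      }
      return result;
    }

The problems below are all about what happens to the inputs of a calculation like this one once pay and tenure depend on its output.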
Presumably, the test is of something that theoretically the students will not know beforehand. Which means the teachers don't want students who study on their own (or participate in activities where that knowledge might be useful). And they don't want students who aren't going to learn it -- whoops, that was a leap, I meant to say who aren't going to test higher at the end. So you don't really want the top tier or the bottom tier coming into your class.
This isn't specific to VAM, but one result of standardized test scores being used for anything meaningful to the teacher (salary, tenure, etc.) is that anything not on the test has an opportunity cost, and so will be omitted in favor of test prep. The more statistical validity VAM has, the stronger this effect will be. If the teacher shows the students how to incorporate their new knowledge into a broader perspective, it may make the school's scores improve, but it will screw over the next teacher in line (because the "before" test scores will be higher). So there's some peer pressure to make sure the students learn nothing that they're "supposed" to learn later.
If you consider a subject like math, what happens is that at some point many students fall behind. This makes the later topics much, much harder, because they build on what they never quite understood. A perfect teacher would figure out what balance of old and new material to give each individual student. That perfect teacher would score poorly on VAM compared to a teacher who crammed in test-specific mechanics and regurgitation, relying on dismal beginning test scores to make poor but not awful ending test scores look good. The system would gradually optimize for squeezing incremental gains out of improperly taught students.
And don't forget that the outcome is what's measured, and what's measured is crap. In football, you can look at a score (or just who won). Here, the structure is tuned to produce students who can do well on year-end tests and nothing else; it certainly doesn't measure their ability to apply their knowledge to situations unlikely to show up on a test.
Ok, this became more of a rant against standardized testing, but it just bothers me that adding statistical power magnifies the problems. You'd be better off throwing in a large random component, so that teachers' innate desires to teach well have a chance at winning out over gaming the system. Because even if your population of teachers is really conscientious, you're actively selecting for those most willing to play the game. And selection always wins in the end.
You're assuming the delta is based on just the prior test score vs. this one -- i.e., going from 10 to 15 counts as the same improvement as going from 80 to 85. However, statistically there is a tendency to regress toward the mean, which makes simply staying at 80 end up as statistical progress. I also suspect they're using a flawed model that ignores the tendency for school districts to stack high-performing teachers on top of other high-performing teachers. To correct for this you need to look at what happens when someone moves from one district to another.
PS: There is a fair amount of momentum in many subjects, so teachers can impact not just this year's test results but next year's as well. In the end it's really difficult to come up with a high-quality model, and my guess is they simply did not bother.
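The regression-toward-the-mean point above is easy to see with a made-up simulation (fabricated numbers, purely to show the effect): give every student a fixed true ability, make each year's score ability plus test-day noise, and have nobody teach anything at all. Last year's high scorers still drop on average, so a teacher whose 80-scorers merely stay at 80 really is beating the statistical expectation.

    // Regression-toward-the-mean demo with made-up numbers: fixed ability,
    // noisy scores, zero teaching effect. Students who scored 80+ last year
    // still show a negative average delta this year.
    function gaussian(): number {
      // Box-Muller transform for a standard normal sample.
      const u = 1 - Math.random();
      const v = Math.random();
      return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
    }

    let deltaSum = 0;
    let count = 0;
    for (let i = 0; i < 100_000; i++) {
      const ability = 70 + 10 * gaussian();
      const lastYear = ability + 8 * gaussian();
      const thisYear = ability + 8 * gaussian();
      if (lastYear >= 80) {            // condition on last year's high scorers
        deltaSum += thisYear - lastYear;
        count++;
      }
    }
    console.log(`average delta for last year's 80+ scorers: ` +
                (deltaSum / count).toFixed(1));
    // Prints a clearly negative number with these parameters, even though
    // nobody learned or forgot anything.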
Well, it's not like teachers only stay in their position for a year. The framework could (and should?) keep monitoring the students' progress down the road and feed it back into the teachers' ratings until the students graduate. That would also increase peer pressure and collaboration between teachers.
I think this is pretty common. In fact, this can happen even if neither browser changes at all. Your profile builds up and your history searches etc. slow down, so you switch browsers and get a clean profile to start with. Repeat. (I'm not saying that that is what you are experiencing, but I think it's a factor for every browser user.)
I don't want my bookmarking completely separated from browsers, but I would like it to sync to shared browser-portable cloud storage. I'd also like the distinction between tabs, bookmarks, and history to go away and be replaced by some categories that are more relevant to me.
I agree. IE kicked everyone else's butt first (in terms of market share), and the Web stagnated. Then Firefox kicked IE's butt, and the Web didn't stagnate quite as much, but let's be honest -- it stagnated. At the moment, Chrome is dominant but no browser is really trampling its competition, especially not in terms of overall quality. And the Web is improving faster than it ever has. (Well, not quite -- the "catch up" phases of the browser wars probably saw faster improvement while they were happening.)
Don't look at asm.js benchmarks if you're interested in real-world JS. Look at them if you want to know something about the performance of C++ code transpiled to run in a browser, which is an interesting thing to know. And it's getting to be more and more relevant these days. (Even "real-world JS" is going to start using asm.js libraries, I bet.)
But asm.js execution is very different from JS execution, even in browsers that don't have specialized asm.js paths. Executing regular JS is all about balancing compile time and garbage collection with code execution. asm.js barely uses GC, and allows lots of opportunities to cache compilation in ways that would be invalid for regular JS. So the whole space of tradeoffs is different.
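For anyone who hasn't looked at it, here's roughly what the asm.js style looks like (a hand-written illustration with made-up names; in practice compilers like Emscripten generate this, nobody writes it by hand). Everything is coerced to integers or doubles and all data lives in one big preallocated ArrayBuffer, which is why there's essentially nothing for the GC to do and why compilation results can be cached so aggressively.

    // Hand-written illustration of the asm.js style (real asm.js is plain JS
    // emitted by compilers such as Emscripten; the type annotations here are
    // just to keep this valid TypeScript). Note the |0 integer coercions and
    // the single typed-array "heap": the hot loop never allocates a JS object.
    function SumModule(stdlib: any, foreign: any, heapBuffer: ArrayBuffer) {
      "use asm";
      var heap32 = new stdlib.Int32Array(heapBuffer);
      function sum(ptr: number, len: number): number {
        ptr = ptr | 0;               // byte offset into the heap
        len = len | 0;               // number of int32 elements to add
        var total = 0;
        var i = 0;
        for (i = 0; (i | 0) < (len | 0); i = (i + 1) | 0) {
          total = (total + (heap32[(ptr + (i << 2)) >> 2] | 0)) | 0;
        }
        return total | 0;
      }
      return { sum: sum };
    }

    // Caller side: the module is still just a function, so engines without a
    // dedicated asm.js path run it as ordinary (but GC-light) JavaScript.
    const buffer = new ArrayBuffer(0x10000);
    new Int32Array(buffer).set([1, 2, 3, 4]);
    console.log(SumModule(globalThis, {}, buffer).sum(0, 4)); // 10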
I agree. If you accept the reasonable premise that incremental GC trades worse throughput for better latency, then splay-latency rewards low throughput.
That isn't as awful as it sounds; it's just that there's nothing in the benchmark that tells the JS engine that penalizing throughput is the right thing to do. It needs some kind of marker that we can agree means "even though it's the wrong thing to do given just the code that you're seeing, pretend this is running in an environment where latency between these semi-arbitrary points matters more than throughput." We are discussing perhaps treating Date.now or window.performance.now as meaning that, because if you're measuring the jitter between turns, you'll be grabbing the current time at exactly those points where you're mimicking ending one turn and starting the next. But that's still not really correct, because you're also asserting that there would be zero idle time between turns, which is generally not true in a real application.
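For context, a latency-oriented benchmark is shaped roughly like this (a simplified sketch with made-up names, not the actual splay-latency source); the performance.now() calls land exactly at those simulated turn boundaries, back to back.

    // Rough shape of a latency-style benchmark (simplified sketch, not the
    // real splay-latency code): do a chunk of allocation-heavy work, sample
    // the clock between chunks, and report the worst gap. The clock reads are
    // the only hint the engine gets that pause times matter here.
    function worstPause(turns: number, work: () => void): number {
      let worst = 0;
      let last = performance.now();
      for (let i = 0; i < turns; i++) {
        work();                          // one simulated "turn"
        const now = performance.now();   // turn boundary, with no idle time
        worst = Math.max(worst, now - last);
        last = now;
      }
      return worst;                      // a long GC pause shows up as a big gap
    }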
The main difference from what is shown in those slides is that V8 uses a semispace collector for its young generation. The SpiderMonkey collector just has a single nursery space.
Jon Coppeard implemented a semispace collector for SpiderMonkey, but the added complexity made it a net loss in performance. So we scrapped it for now. It means we get a few objects unfairly tenured, but our measurements showed the actual number was pretty low and not worth the overhead.
It's totally workload dependent, and further GGC tuning (there's a lot left to do!) may reverse that balance.
It has some impact on overall execution time (throughput), mostly through being able to bump allocate. The benchmark I used in the article shows a large gain, though the real-world effect is much less.
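The kind of code that gain comes from looks something like this (a generic allocation-heavy loop I made up, not the actual benchmark from the article): every iteration creates objects that die almost immediately, so with GGC they can be bump-allocated in the nursery and reclaimed by cheap minor collections instead of ever reaching the full heap.

    // Generic allocation-heavy loop (not the article's benchmark): the
    // temporaries die within one iteration, which is exactly the pattern a
    // bump-allocating nursery is good at.
    function simulate(steps: number): number {
      let x = 0;
      let y = 0;
      for (let i = 0; i < steps; i++) {
        const p = { x: Math.sin(i), y: Math.cos(i) };  // short-lived object
        const q = { x: p.x + y, y: p.y + x };          // another one
        x = q.x;                                       // only numbers survive
        y = q.y;
      }
      return x + y;
    }

    const t0 = Date.now();
    simulate(10_000_000);
    console.log(`took ${Date.now() - t0} ms`);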
But GGC is mostly about reducing pause times (latency). (And yes, people have taken to calling that "jank". I still resist the term when I can, since it's no more precise than anything that came before but people seem to think it must mean something specific.)
(disclaimer: I am a Firefox platform developer)