Are Math Textbooks Written by People who Hate Math?

Now that I’m basically home-schooling my daughter due to The Lockdown, I’m realizing how ridiculous math textbooks and workbooks are. Who writes these things / creates these problem sets? Today’s homework assignment had these nuggets in it:

“Kelly subtracted 2.3 from 20 and got 17.7. Explain why this answer is reasonable.”

The obvious answer is “because it is correct.” But that would get the student zero points. The expected (I assume) answer is about number sense / estimation, e.g., “If I subtract 2 from 20 I get 18, but I have to subtract a little bit more, and 17.7 is a little bit less than 18, so 17.7 is a reasonable answer.” Now my issue with this problem is that the actual arithmetic is so simple that it is arguably easier to just do it than to go the estimation route. The problem sets the students up for failure, and undercuts the point of the unit: that estimation is a valuable tool. A better problem would have used numbers with more digits, both to hint that the students were supposed to estimate the result instead of calculating it, and to show that estimation saves time and effort.

“At a local swim meet, the second-place swimmer of the 100-m freestyle had a time of 9.33 sec. …”

This one made me laugh out loud, and I’m not even a sports fan who follows swimming. But even I know that swimming is a lot slower than running, and upon checking, I found that the world record for the 100 m freestyle is 46.91 seconds. Who was competing in this “local swim meet?” Aquaman? My issue here is that the problem creator failed to understand the reason for using this type of word problem in the first place: reinforcing the notion that math matters in the real world. By choosing these laughable numbers, the creator not only undercut that notion, but created exactly the opposite impression in the students: that math has no relationship to the real world.
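For a quick sanity check on just how absurd that is: 100 m in 9.33 s works out to about 10.7 m/s, while the actual world-record swim of 46.91 s averages about 2.1 m/s. Even Usain Bolt’s 100 m sprint world record of 9.58 s only averages about 10.4 m/s, so this second-place swimmer at a local meet is moving through water faster than Bolt runs on land.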

And from today’s section of the textbook, this table:

Location          Rainfall amount in a typical year (in inches)
Macon, GA         45
Boise, ID         12.19
Caribou, ME       37.44
Springfield, MO   44.97

Followed by this question: “What is the typical yearly rainfall for all four cities?” The book expects 139.6 inches as the answer, but that answer makes no sense. Rainfall amounts measured in inches cannot be added up across multiple locations, because they are ratios, specifically volume of rain per unit of ground area. How is that supposed to work? Stacking the four cities on top of each other? As in the previous example, this problem undercuts the goal of showing that math has a relationship to the real world. These students, being in fifth grade, wouldn’t necessarily realize the issue with this problem, but it really makes me wonder whether the person creating this example has advanced beyond fifth grade. Or, even worse, whether that person is actively trying to create the impression that math is just some numbers game that happens in a vacuum. If so, good job.
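(The question that would actually make sense here is the average: (45 + 12.19 + 37.44 + 44.97) / 4 ≈ 34.9 inches per city in a typical year. The 139.6-inch sum, on the other hand, doesn’t describe anything that exists in the real world.)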

My daughter was actually stumped by this last one, having no idea what the book meant by “typical yearly rainfall for all four cities.” I had to explain to her that the question makes no sense, and reassure her that math is important, even if the math textbook goes out of its way to teach students that math is frustrating, incomprehensible, and pointless. Again, good job, textbook writers.

In violation of Betteridge’s Law, I will answer the question posed in this post’s headline with a resounding “YES!”

Information Superhighway Robbery

So, apparently this is a business model: you trawl YouTube for videos with a decent number of views (not too many, mind you), uploaded by someone who is not a bona fide YouTube star or well-known personality, or part of some content or ad exchange network, and file copyright claims on those videos. But you’re nice about it. You don’t threaten to take down those videos right away; you just give a friendly heads-up that some of the content in those videos is owned by you, and that you are therefore entitled to monetize those videos on behalf of (and instead of) the uploader. No big deal, it’s only fair, right?

Well, granted, it can be. YouTube is obviously a cesspool of blatant and gleeful copyright infringement. I especially like those uploaders who are rather clueless about copyright, and think it’s perfectly fine to rip off and upload someone else’s work, be it a music video or TV show episode or whole movie, as long as they put a disclaimer like “uploaded under fair use” or “no copyright infringement intended” into the video description. I very especially like the second “excuse,” because the cognitive dissonance is so delicious. “I just threw a well-aimed brick through your window, but I totally didn’t intend to do that!” Sure you didn’t. I have a few semi-popular videos on YouTube myself, and it grinds my gears if someone else re-uploads them, in lousy quality and with ads plastered all over. There was one case early on where a re-uploaded video got significantly more views and discussion than my original, and I had to go over there and answer questions. Wasn’t cool.

Anyway, back on topic. While there needs to be some mechanism for copyright holders to assert their rights, the current one seems skewed towards appeasing “big content providers,” and wide open to abuse by, well, scum. For the former, exhibit A: “Sony Filed a Copyright Claim Against the Stock Video I Licensed to Them.” There’s really nothing I can add to that, except that this is an instance where someone’s livelihood was seriously messed with.

As for (likely) abuse, last night I noticed a Content ID copyright claim on one of my aforementioned semi-popular videos: Continue reading

Here we go again with Apple’s holography patent

I just found an article about my 3D Video Capture with Three Kinects video on Discovery News (which is great!), but then I spotted Figure 1 in the “Related Gallery.” Oh, and they also had a link to another article titled “Virtual Reality Sex Game Set To Stimulate” right in the middle of my article, but you learn to take that kind of thing in stride.

Figure 1: Image in the “related gallery” on Discovery News. Original caption: “Apple has filed a patent for a holographic phone, a concept that sounds absolutely cool. We can’t wait. But what would it look like? A video created by animator Mike Ko, who has made animations for Google, Nike, Toyota, and NASCAR, gives us an idea. Check it out here”

Nope. Nope nope nope no. Where do I start? No, Apple has not filed a patent for a holographic phone. And even if Apple had, this is not what it would look like. I don’t want to rag on Mike Ko, the animator who created the concept video (watch it here, it’s beautiful). It’s just that this is not how holograms work. See Figure 2 for a very crude Photoshop (well, Gimp) job showing what this would look like if such holographic screens really existed, and Figure 4 for an even cruder job showing what the thing Apple actually patented would look like, if they were audacious enough to put it into an iPhone. Continue reading

Apple Patents Holographic Projector (no, not quite)

About once a day I check out this blog’s access statistics, and specifically the search terms that brought viewers to it (that’s how I found out that I’m the authority on the Oculus Rift being garbage). It’s often surprising, and often leads to new (new to me, at least) discoveries. Following one such search term, today I learned that Apple was awarded a patent for interactive holographic display technology. Well, OK, strike that. Today I learned that, apparently, reading an article is not a necessary condition for reblogging it — Apple wasn’t awarded a patent, but a patent application that Apple filed 18 months ago was published recently, according to standard procedure.

But that aside, what’s in the patent? The main figure in the application (see Figure 1) should already clue you in, if you read my pair of posts about the thankfully failed Holovision Kickstarter project. It’s a volumetric display of some unspecified sort (maybe a non-linear crystal? Or, if that fails, a rotating 2D display? Or “other 3D display technology?” Sure, why be specific! It’s only a patent! I suggest adding “holomatter” or “mass effect field” to the list, just to be sure.), placed inside a double parabolic mirror to create a real image of the volumetric display floating in air above the display assembly. Or, in other words, Project Vermeer. Now, I’m not a patent lawyer, but how Apple keeps filing patents on the patently trivial (rounded corners, anyone?), or on the exact thing that Microsoft showed in 2011, about a year before Apple’s patent was filed, is beyond me.

Figure 1: Main image from Apple’s patent application, showing the unspecified 3D image source (24) located inside the double-parabolic mirror, and the real 3D image of same (32) floating above the mirror. There is also some unspecified optical sensor (16) that may or may not let the user interact with the real 3D image in some unspecified way.

Continue reading

Small Correction to Rift’s Projection Matrix

In a previous post, I looked at the Oculus Rift’s internal projection in detail, and did some analysis of how stereo rendering setup is explained in the Rift SDK’s documentation. Looking at that again, I noticed something strange.

In the other post, I simplified the Rift’s projection matrix as presented in the SDK documentation to

P = \begin{pmatrix} \frac{2 \cdot \mathrm{EyeToScreenDistance}}{\mathrm{HScreenSize} / 2} & 0 & 0 & 0 \\ 0 & \frac{2 \cdot \mathrm{EyeToScreenDistance}}{\mathrm{VScreenSize}} & 0 & 0 \\ 0 & 0 & \frac{z_\mathrm{far}}{z_\mathrm{near} - z_\mathrm{far}} & \frac{z_\mathrm{far} \cdot z_\mathrm{near}}{z_\mathrm{near} - z_\mathrm{far}} \\ 0 & 0 & -1 & 0 \end{pmatrix}

which, to those in the know, doesn’t look like a regular OpenGL projection matrix, such as created by glFrustum(…). More precisely, the third row of P is off. The third-column entry should be \frac{z_\mathrm{near} + z_\mathrm{far}}{z_\mathrm{near} - z_\mathrm{far}} instead of \frac{z_\mathrm{far}}{z_\mathrm{near} - z_\mathrm{far}}, and the fourth-column entry should be 2 \cdot \frac{z_\mathrm{far} \cdot z_\mathrm{near}}{z_\mathrm{near} - z_\mathrm{far}} instead of \frac{z_\mathrm{far} \cdot z_\mathrm{near}}{z_\mathrm{near} - z_\mathrm{far}}. To clarify, I didn’t make a mistake in the derivation; the matrix’s third row is the same in the SDK documentation.

What’s the difference? It’s subtle. Changing the third row of the projection matrix doesn’t change where pixels end up on the screen (that’s the good news). It only changes the z, or depth, value assigned to those pixels. In a standard OpenGL frustum matrix, 3D points on the near plane get a depth value of -1.0, and those on the far plane get a depth value of 1.0 (after perspective division). The 3D clipping operation that’s applied to any triangle after projection uses those depth values to cut off geometry outside the view frustum, and the viewport transformation after that maps the [-1.0, 1.0] depth range to [0, 1] for z-buffer hidden surface removal.

Using a projection matrix as presented in the previous post, or in the SDK documentation, will still assign a depth value of 1.0 to points on the far plane, but a depth value of 0.0 instead of -1.0 to points on the (nominal) near plane. This means that the near plane distance given as parameter to the matrix is not the actual near plane distance used by clipping and z buffering, which might lead to some geometry appearing in the view that shouldn’t, and to a loss of resolution in the z buffer, because only half of the value range is used.
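To make this concrete, take a point on the nominal near plane, at eye-space z = -z_\mathrm{near}, and run it through both versions of the matrix (in both cases the fourth row gives w_\mathrm{clip} = -z_\mathrm{eye} = z_\mathrm{near}). With the standard glFrustum-style third row, the resulting depth value is

z_\mathrm{ndc} = \frac{\frac{z_\mathrm{near} + z_\mathrm{far}}{z_\mathrm{near} - z_\mathrm{far}} \cdot (-z_\mathrm{near}) + 2 \cdot \frac{z_\mathrm{far} \cdot z_\mathrm{near}}{z_\mathrm{near} - z_\mathrm{far}}}{z_\mathrm{near}} = \frac{z_\mathrm{near} \cdot (z_\mathrm{far} - z_\mathrm{near})}{z_\mathrm{near} \cdot (z_\mathrm{near} - z_\mathrm{far})} = -1

as expected, whereas with the third row from the SDK documentation it is

z_\mathrm{ndc} = \frac{\frac{z_\mathrm{far}}{z_\mathrm{near} - z_\mathrm{far}} \cdot (-z_\mathrm{near}) + \frac{z_\mathrm{far} \cdot z_\mathrm{near}}{z_\mathrm{near} - z_\mathrm{far}}}{z_\mathrm{near}} = 0

which puts the nominal near plane right in the middle of the depth range.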

I’m assuming that this is just a typo in the Oculus SDK documentation, and that the library code does the right thing (I haven’t looked).

Oh, right, so the fixed projection matrix, for those following along, is

P = \begin{pmatrix} \frac{2 \cdot \mathrm{EyeToScreenDistance}}{\mathrm{HScreenSize} / 2} & 0 & 0 & 0 \\ 0 & \frac{2 \cdot \mathrm{EyeToScreenDistance}}{\mathrm{VScreenSize}} & 0 & 0 \\ 0 & 0 & \frac{z_\mathrm{near} + z_\mathrm{far}}{z_\mathrm{near} - z_\mathrm{far}} & 2 \cdot \frac{z_\mathrm{far} \cdot z_\mathrm{near}}{z_\mathrm{near} - z_\mathrm{far}} \\ 0 & 0 & -1 & 0 \end{pmatrix}
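And for those who would rather copy code than transcribe a matrix, here is a minimal sketch (mine, not code from the Oculus SDK) of how to build the corrected matrix in OpenGL column-major order, using the same parameter names as above; the function name and the plain float array are just my choices for illustration:

#include <array>

// Build the corrected Rift projection matrix in OpenGL column-major order.
// The result can be passed to glLoadMatrixf() or uploaded as a shader uniform.
std::array<float, 16> riftProjection(float eyeToScreenDistance,
                                     float hScreenSize, float vScreenSize,
                                     float zNear, float zFar)
{
    std::array<float, 16> m = {}; // start with all entries at zero

    m[0] = 2.0f * eyeToScreenDistance / (hScreenSize / 2.0f); // row 0, column 0
    m[5] = 2.0f * eyeToScreenDistance / vScreenSize;          // row 1, column 1

    // The corrected third row, matching a standard glFrustum() matrix:
    m[10] = (zNear + zFar) / (zNear - zFar);      // row 2, column 2
    m[14] = 2.0f * zFar * zNear / (zNear - zFar); // row 2, column 3

    m[11] = -1.0f; // row 3, column 2: perspective division by -z

    return m;
}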

Getting Mail from WordPress on Linux

I’ve mentioned previously that I’m running WordPress on my own web server on top of Linux, and that it took me some digging to get it to play nicely with SELinux. Turns out I don’t learn from old mistakes.

WordPress has this nice feature where it sends notifications about new comments, and comments held for moderation, to the admin account’s email address. Well, that never worked for me. That’s a problem: while it’s easy to regularly check the queue of held comments to approve those that are legit, new comments from readers who previously had a comment approved are not held, and can get lost quite easily. I’ve recently found a bunch that I should have answered months ago, but I never saw them.

And as before, the solution is easy once you know what you’re looking for: it’s SELinux again. In my sendmail log file, I found a bunch of these error messages:

NOQUEUE: SYSERR(apache): /etc/mail/sendmail.cf: line 0:
cannot open: Permission denied

Upon checking that file and finding it had proper permissions (read/write for user, read for group and other), I figured there would be some secret SELinux context that needed to be applied, and wondered why it wasn’t configured right by default. Turns out it’s something different. SELinux also maintains a set of global boolean flags, and there is one specific boolean that controls whether an HTTP server is allowed to access sendmail. Seems rather specific, that.

Anyway, to check whether apache (or any other web server) is allowed to send mail via sendmail, run

$ getsebool httpd_can_sendmail

which will reply either “on” or “off” (mine was “off,” big surprise). Then, to enable mail, run

$ setsebool httpd_can_sendmail 1

and voila. Now I’m getting email about comments, and about new users registering for my blog. Yay. As it turns out, there are a lot of new users registering for my blog. After deleting several thousand bogus ones, I’ve disabled user registration for now. Turns out there wasn’t a single registered user that looked legit. Oh well.

This probably means readers who tried to sign up for email notifications about new posts or comment replies didn’t get any either, but nobody ever complained, so I’m not sure. Anyway, it should work now.
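One caveat worth noting: setsebool on its own only changes the boolean for the currently running system, and the change is lost on reboot. Adding the -P flag writes it into the policy permanently:

$ setsebool -P httpd_can_sendmail 1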

Holovision revisioned

Boy, do I hate being wrong. And I was very wrong about the Holovision — A Life Size Hologram Kickstarter project (don’t bother; the page is down permanently) I talked about in a previous post. Why was I wrong? Because I gave those people way too much credit. I questioned their claims of life-size holograms, I questioned their PR material (because it helped me address a lingering point), but I didn’t question their most basic claim: 3D. I guess I’m too gullible.

Continue reading

The Holovision Kickstarter “scam”

Update: Please tear your eyes away from the blue lady and also read this follow-up post. It turns out things are worse than I thought. Now back to your regularly scheduled entertainment.

I somehow missed this when it was hot a few weeks ago, but I just found out about an interesting Kickstarter project: HOLOVISION — A Life Size Hologram. Don’t bother clicking the link; the project page has been taken down following a DMCA complaint and might never be up again.

Why do I think it’s worth talking about? Because, while there is an actual design for something called Holovision, and that design is theoretically feasible, and possibly even practical, the product that the Kickstarter campaign led the public to expect is decidedly not. The concept imagery associated with the Kickstarter project presents this feasible technology in a way that (intentionally?) taps into people’s misconceptions about holograms (and I’m talking about the “real” kind of holograms, the ones involving lasers and mirrors and beam splitters). In other words, it might not be a scam per se, and it might even be unintentional, but it is definitely creating a false impression that might lead to very disappointed backers.

Figure 1: This image is a blatant lie.

Continue reading

Is VR dead?

No, and it doesn’t even smell funny.

But let’s back up a bit. When it comes to VR, there are three prevalent opinions:

  1. It’s a dead technology. It had its day in the early nineties, and there hasn’t been anything new since. After all, the CAVE was invented in ’91 and is basically still the same, and head-mounted displays have been around even longer.
  2. It hasn’t been born yet. But maybe if we wait 10 more years, and there are some significant breakthroughs in display and computer technology, it might become interesting or feasible.
  3. It’s fringe technology. Some weirdos keep picking at it, but it hasn’t ever led to anything interesting or useful, and never will.

Continue reading

GPU performance: Nvidia Quadro vs Nvidia GeForce

One of the mysteries of the modern age is the existence of two distinct lines of graphics cards from the two big manufacturers, Nvidia and ATI/AMD. There are gamer-level cards, and professional-level cards. What are their differences? Obviously, gamer-level cards are cheap, because the companies face stiff competition from each other, and want to sell as many of them as possible to make a profit. So why are professional-level cards so much more expensive? For comparison, an “entry-level” $700 Quadro 4000 is significantly slower than a $530 high-end GeForce GTX 680, at least according to my measurements using several Vrui applications, and the closest performance equivalent to a GeForce GTX 680 I could find was a Quadro 6000, for a whopping $3660. Granted, the Quadro 6000 has 6 GB of video RAM to the GeForce’s 2 GB, but that doesn’t explain the difference.

Continue reading