People are increasingly turning to online resources to learn to code. However, despite their prevalence, it remains unclear how successfully these resources teach CS1 programming concepts. Using a pretest–posttest study design, we measured the performance of 60 novices before and after they completed one of three randomly assigned learning activities: 1) completing a Python course on the website Codecademy, 2) playing through and finishing a debugging game called Gidget, or 3) using Gidget's puzzle designer to write programs from scratch. The pretest and posttest exams each consisted of 24 multiple-choice questions that were selected and validated using data from 1,494 crowdsourced respondents. All 60 novices across the three conditions performed poorly overall on both exams (e.g., the best median posttest score was 50% correct). However, those completing the Codecademy course and those playing through the Gidget game more than doubled their correct answers from pretest to posttest, and the Gidget players achieved these learning gains in half the time. In contrast, novices who used the puzzle designer showed no measurable learning gains. Participants performed similarly within their conditions regardless of gender, age, or education. These findings suggest that discretionary online educational technologies can successfully teach novices introductory programming concepts (to a degree) within a few hours when explicitly guided by a curriculum.