Why Science Practices and the Nature of Science Matter

You can tell that I’m a major science education geek because I spent a good part of my recent vacation at the beach reading a book about science: The Golem, by Harry Collins and Trevor Pinch. There’s a copy of the book available online (jump to the brief conclusion if you want to get the essence of the book).

Collins and Pinch make an interesting argument about how “real science” is done. Because science on the cutting edge is so messy, in many cases experiment and evidence alone cannot resolve a debate. In these cases, “knowing more science” is not necessarily helpful, because the available facts can be interpreted in different ways. Similarly, in public debates involving science, simple knowledge of science content is not enough to be an informed citizen. Citizens must also understand how science is done- in the terms of the Next Generation Science Standards, they need to understand science practices, including the nature of science. This is the only way to make sense not only of how some scientific ideas become well-established facts, but also of why scientists are unable to come to consensus on some important ideas at the cutting edge. Collins and Pinch argue that it is often in these areas, where there is no clear scientific consensus, that citizens must participate in public decision-making.

Ironically, it may also be a lack of understanding of science practices and the nature of science that has resulted in current anti-science attitudes. Scientists are often portrayed in popular discussion as belonging to one of two categories: gods (i.e., all-knowing) or charlatans (knowing nothing). The reality, of course, is neither- scientists are experts, similar in this way to other experts such as plumbers. To quote Collins and Pinch:

Plumbers are not perfect- far from it- but society is not beset with anti-plumbers because anti-plumbing is not a choice available to us. It is not a choice because the counter-choice, plumbing as immaculately conceived, is likewise not on widespread offer.

It may be that engaging in science practices and seeing the nature of science for themselves is not the most efficient way for students to learn science content. But while students ultimately need to understand the key facts of science, they also need to understand how scientific knowledge is constructed- for example, how the results of experiments must always be interpreted, even to get the “right” answer (i.e., achieve a result that is already well understood to be correct). The only way to achieve this aim is to have students engage in real science themselves.


What is STEM Education, Anyway?

Over on the Navigator blog, I’ve got a new post up: What is STEM Education, Anyway? Please check it out if you’re interested!


The break-things-into-bits mistake we have been making in education for centuries – happening today with standards

Been thinking about this as a pitfall to avoid as we work on new curriculum for the Next Generation Science Standards.

Granted, and...

In the just-released Math Publisher’s Criteria document on the Common Core Standards, the authors say this about (bad) curricular decision-making:

“’Fragmenting the Standards into individual standards, or individual bits of standards … produces a sum of parts that is decidedly less than the whole’ (Appendix from the K-8 Publishers’ Criteria). Breaking down standards poses a threat to the focus and coherence of the Standards. It is sometimes helpful or necessary to isolate a part of a compound standard for instruction or assessment, but not always, and not at the expense of the Standards as a whole.

“A drive to break the Standards down into ‘microstandards’ risks making the checklist mentality even worse than it is today. Microstandards would also make it easier for microtasks and microlessons to drive out extended tasks and deep learning. Finally, microstandards could allow for micromanagement: Picture teachers and students being held accountable for ever more…



Fordham’s claim of NGSS/CCSSM “misalignment” doesn’t stand up to scrutiny

This week the Thomas B. Fordham Institute released a report claiming “alignment glitches” between the Next Generation Science Standards and the Common Core State Standards for Mathematics. Fordham’s original report on the standards suffered from a number of issues, but many of those boiled down to differences in educational philosophy. Unfortunately, this new report suffers from far more serious problems.

The Fordham report notes three supposed issues with the math in the NGSS:

  • Misalignment: situations where the math required by the NGSS exceeds the math required by the Common Core for that grade level.
  • Missed opportunities to include more math.
  • Superficial connections between the science and math content.

I’m going to focus on the “misalignment” criticism because that is clearly the most serious of the three. Nearly all of the examples they give of this supposed shortcoming don’t hold water. Let’s go through them one by one.

Supposed misalignment #1: 

4-PS3-1: Use evidence to construct an explanation relating the speed of an object to the energy of that object. [Assessment Boundary: Assessment does not include quantitative measures of changes in the speed of an object or on any precise or quantitative definition of energy.]

The Fordham report claims that students would need to use quadratic functions to meet this standard, which would be well beyond 4th grade math. It also claims that without math, students cannot construct an explanation.

This is nonsense. The relevant disciplinary core idea here is that “the faster a given object is moving, the more energy it possesses.” There are numerous non-quantitative ways a 4th grade student could explain how this is true. For example, cars traveling at faster speeds in an accident obviously suffer greater damage. A faster bowling ball will knock down pins more easily than a slower bowling ball. An experiment could easily be done by rolling or launching an object at varying speeds and observing increasingly greater effects for higher speeds.

Most importantly, the NGSS does not attempt to align this standard to Common Core math. So this can in no way be interpreted as an “alignment glitch.”

Supposed misalignment #2: 

4-PS4-1: Develop a model of waves to describe patterns in terms of amplitude and wavelength and that waves can cause objects to move. [Clarification Statement: Examples of models could include diagrams, analogies, and physical models using wire to illustrate wavelength and amplitude of waves.] [Assessment Boundary: Assessment does not include interference effects, electromagnetic waves, non-periodic waves, or quantitative models of amplitude and wavelength.]

This one is just bizarre to me- the clarification statement specifically describes the kinds of models that are meant here, and yet the Fordham report goes on about trigonometric functions. The NGSS links this to the CCSSM math practice standard “Model with mathematics.” In this case, Appendix L extends this to identify a CCSS standard related to drawing points, lines, and line segments, all of which seems to me a perfect example of “modeling with mathematics” on a diagram to illustrate the different properties of a wave (including amplitude and wavelength).

Supposed misalignment #3: 

MS-PS4-1: Use mathematical representations to describe a simple model for waves that includes how the amplitude of a wave is related to the energy in a wave. [Clarification Statement: Emphasis is on describing waves with both qualitative and quantitative thinking.] [Assessment Boundary: Assessment does not include electromagnetic waves and is limited to standard repeating waves.]

Again, the Fordham report claims that this requires “trigonometric and quadratic functions.” This is simply untrue. Here’s the relevant physical principle: the energy of a wave is proportional to the square of the wave’s amplitude. To understand this, students simply need to be able to do something like “write and evaluate numerical expressions involving whole number exponents,” which, coincidentally, is a 6th grade math expectation in the CCSS. Although this mathematical connection is not made explicit in the NGSS, the connections regarding ratios (for example, ratios relating the number of crests or troughs within a specified length of the wave) and giving examples of non-linear functions all seem very relevant here.
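As a quick sketch (mine, not taken from the NGSS or the report), evaluating a whole-number exponent is all it takes to see how wave energy scales with amplitude:

```python
# Wave energy is proportional to the square of amplitude: E ~ A**2.
# Comparing a wave against a reference wave shows how a single
# whole-number exponent captures the relationship.
def relative_energy(amplitude, reference_amplitude=1):
    """Energy of a wave relative to a reference wave, ignoring constant factors."""
    return (amplitude / reference_amplitude) ** 2

# Doubling the amplitude quadruples the energy; tripling it gives nine times.
print(relative_energy(2))  # 4.0
print(relative_energy(3))  # 9.0
```

This is exactly the level of “write and evaluate numerical expressions involving whole number exponents,” not trigonometry.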

Supposed misalignment #4:

MS-PS2-2: Plan an investigation to provide evidence that the change in an object’s motion depends on the sum of the forces on the object and the mass of the object. [Clarification Statement: Emphasis is on balanced (Newton’s First Law) and unbalanced forces in a system, qualitative comparisons of forces, mass and changes in motion (Newton’s Second Law), frame of reference, and specification of units.] [Assessment Boundary: Assessment is limited to forces and changes in motion in one-dimension in an inertial reference frame and to change in one variable at a time. Assessment does not include the use of trigonometry.]

The Fordham report claims that the standard requires the use of vectors, which is not grade appropriate. After much discussion in the report, their beef seems to be simply that the PE doesn’t read “Plan an investigation of one-dimensional motion to provide evidence that the change in an object’s motion depends on the sum of the forces on the object and the mass of the object.” But that is clearly described in the assessment boundary: the motion is in a single direction. And regardless, as the Fordham report says, meeting this PE for 1-dimensional motion is completely grade-appropriate, so this is not a case of any meaningful “misalignment” but nit-picking over formatting and wording.

Supposed misalignment #5:

MS-ESS2-6: Develop and use a model to describe how unequal heating and rotation of the Earth cause patterns of atmospheric and oceanic circulation that determine regional climates. [Clarification Statement: Emphasis is on how patterns vary by latitude, altitude, and geographic land distribution. Emphasis of atmospheric circulation is on the sunlight-driven latitudinal banding, the Coriolis effect, and resulting prevailing winds; emphasis of ocean circulation is on the transfer of heat by the global ocean convection cycle, which is constrained by the Coriolis effect and the outlines of continents. Examples of models can be diagrams, maps and globes, or digital representations.] [Assessment Boundary: Assessment does not include the dynamics of the Coriolis effect.]

This one is perhaps the strangest of all to me. The Fordham report seems stuck in the viewpoint that the “models” being referred to here are the kinds of climate models professional scientists use, despite the explicit statement “Examples of models can be diagrams, maps and globes, or digital representations.” All of these are obvious tools used to teach and understand this concept at the middle school level. Most bizarrely, there is no attempt in either the NGSS or Appendix L to tie this to CCSS math at all! Look at the CCSS connections for this standard- MS-ESS2-6 appears only in the ELA connections. The misalignment here seems to exist purely in the imaginations of the Fordham authors.

Supposed misalignment #6:

HS-ESS1-4: Use mathematical or computational representations to predict the motion of orbiting objects in the solar system.[Clarification Statement: Emphasis is on Newtonian gravitational laws governing orbital motions, which apply to human-made satellites as well as planets and moons.] [Assessment Boundary: Mathematical representations for the gravitational attraction of bodies and Kepler’s Laws of orbital motions should not deal with more than two bodies, nor involve calculus.]

Apparently this “draws upon rather serious college-level mathematics,” according to Fordham. Or maybe it means Newton’s law of universal gravitation and Kepler’s laws. After all, both are mentioned in the additional information that accompanies the PE, and both are standard high school science expectations. Fordham’s claim that there is “not much left in the solar system” if you limit yourself to two bodies at a time discredits the fundamental importance and validity of these laws, and I have to assume it is intended more for laughs than as a serious critique.
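To make this concrete, here is a minimal sketch of the kind of two-body calculation a high school student could actually do with Kepler’s third law (the constants are standard textbook values; the function name is my own illustration):

```python
import math

# Kepler's third law for a small body orbiting the Sun: T = 2*pi*sqrt(a**3 / GM).
GM_SUN = 1.327e20   # gravitational parameter of the Sun, m^3/s^2
AU = 1.496e11       # astronomical unit, m

def orbital_period_years(semi_major_axis_m):
    """Orbital period, in years, of a body orbiting the Sun."""
    period_s = 2 * math.pi * math.sqrt(semi_major_axis_m ** 3 / GM_SUN)
    return period_s / (365.25 * 24 * 3600)

# Earth (a = 1 AU) should come out close to 1 year;
# Mars (a ~ 1.524 AU) close to 1.88 years.
print(round(orbital_period_years(AU), 2))
print(round(orbital_period_years(1.524 * AU), 2))
```

Algebra, a square root, and an exponent- no calculus required, exactly as the assessment boundary says.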

Supposed misalignment #7:

HS-ESS2-6: Develop a quantitative model to describe the cycling of carbon among the hydrosphere, atmosphere, geosphere, and biosphere. [Clarification Statement: Emphasis is on modeling biogeochemical cycles that include the cycling of carbon through the ocean, atmosphere, soil, and biosphere (including humans), providing the foundation for living organisms.]

The Fordham authors seem hung up on the idea that a “quantitative model” used by grade school students must be the same type of model used by professional scientists, as their argument is that this kind of model is inappropriate for high school students. But there are many other kinds of models. Here’s an example of an end product of an assessment of this PE as I’d envision it, and as I’d guess any high school Earth Science teacher would envision it as well. This is a quantitative model that requires no mathematics a high school student couldn’t easily handle. A simple spreadsheet model would be another potential example. Look at the CCSS mathematics connections for HS-ESS2-6 as well- they all reference choosing appropriate units, measurement accuracy, and similar concepts that are highly relevant to creating this kind of representation.
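As an illustration of what a grade-appropriate “quantitative model” might look like, here is a toy two-box carbon model of the kind a spreadsheet could hold. The reservoir sizes and fluxes below are illustrative placeholders of my own, not real data:

```python
# A toy two-box carbon model: carbon moves between the atmosphere and the
# ocean along two annual fluxes. Reservoir sizes (GtC) and fluxes (GtC/yr)
# are illustrative placeholders, not measured values.
reservoirs = {"atmosphere": 800.0, "ocean": 38000.0}

def step(res, atm_to_ocean=90.0, ocean_to_atm=88.0):
    """Advance the model one year by moving carbon along the two fluxes."""
    res = dict(res)
    res["atmosphere"] += ocean_to_atm - atm_to_ocean
    res["ocean"] += atm_to_ocean - ocean_to_atm
    return res

total_before = sum(reservoirs.values())
for _ in range(10):  # run the model for ten years
    reservoirs = step(reservoirs)
total_after = sum(reservoirs.values())
# Carbon is conserved: whatever leaves one box enters the other.
```

Nothing here goes beyond the arithmetic and bookkeeping a high school student handles routinely, yet it is genuinely quantitative.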

Supposed misalignment #8:

HS-ESS3-3: Create a computational simulation to illustrate the relationships among management of natural resources, the sustainability of human populations, and biodiversity. [Clarification Statement: Examples of factors that affect the management of natural resources include costs of resource extraction and waste management, per-capita consumption, and the development of new technologies. Examples of factors that affect human sustainability include agricultural efficiency, levels of conservation, and urban planning.] [Assessment Boundary: Assessment for computational simulations is limited to using provided multi-parameter programs or constructing simplified spreadsheet calculations.]

The Fordham report makes similar criticisms about “nothing in the CCSSM” allowing a student to do this. The CCSS connections cited in the NGSS are related to math practices, which certainly seems reasonable. Here’s a great little simulation that I’d use with students for this. Spreadsheet calculations seem equally appropriate. I just don’t see what’s so “mis-aligned” about expecting students to apply basic ideas of addition, subtraction, division, ratios, equations, etc. to understand ideas like per-capita consumption.
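Here is a sketch of the spreadsheet-style arithmetic involved (all numbers are hypothetical, chosen only to show the grade-level math at work):

```python
# Spreadsheet-style resource calculation using per-capita consumption.
# Every figure below is hypothetical, for illustration only.
reserves = 1.2e12      # units of a resource remaining
population = 8.0e9     # people
per_capita_use = 5.0   # units consumed per person per year

annual_consumption = population * per_capita_use   # total units used per year
years_remaining = reserves / annual_consumption
print(years_remaining)  # 30.0
```

Multiplication, division, and ratios- exactly the math practices the NGSS connections cite.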

The lone true misalignment: 

5-ESS2-2: Describe and graph the amounts and percentages of water and fresh water in various reservoirs to provide evidence about the distribution of water on Earth. [Assessment Boundary: Assessment is limited to oceans, lakes, rivers, glaciers, ground water, and polar ice caps, and does not include the atmosphere.]

The Fordham report rightly notes that percentages are not taught in the math CCSS until grade 6, so this standard is off by a year. Although you could theoretically graph a percentage without knowing how to calculate one, this is a definite misalignment- though hardly one that goes “well beyond” the math expected for the grade level. Perhaps percentages could be taught at the end of 5th grade in preparation for 6th grade?
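For what it’s worth, the math at stake is modest. As a sketch (the reservoir volumes are illustrative round numbers loosely based on commonly cited estimates, not authoritative data), the percentage calculation behind such a graph looks like this:

```python
# Approximate volumes of Earth's water reservoirs in cubic kilometers.
# These are illustrative round numbers, not authoritative data.
reservoirs_km3 = {
    "oceans": 1_338_000_000,
    "glaciers and ice caps": 24_000_000,
    "groundwater": 23_000_000,
    "lakes and rivers": 180_000,
}

total = sum(reservoirs_km3.values())
# Each reservoir's share of the total, as a percentage.
percentages = {name: 100 * vol / total for name, vol in reservoirs_km3.items()}
for name, pct in percentages.items():
    print(f"{name}: {pct:.2f}%")
```

One division and one multiplication per reservoir- a stretch for grade 5 under the CCSS, but only by that single year.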


There is simply no evidence for the kind of serious misalignment the Fordham report claims. I have to assume the report showcases the most egregious examples its authors found, yet the only real misalignment noted is a minor one. I don’t doubt that there are imperfections in the NGSS/CCSSM connections, and I wish these connections had been included in the publicly released drafts of the standards so that the science education community could have helped troubleshoot them. But if there are serious alignment issues, the Fordham report has not identified them.


On Methods Versus Aims

Reading Daisy Christodoulou’s preview of her recent book, Seven Myths About Education, I was especially struck by Myth 6- that projects and activities are the best way to learn. This quote resonated with me in particular:

Our aim should be for pupils to be able to tackle real-world problems by the end of their education; that does not mean that our method should involve endless practice of real-world problems. This is because real-world problems often involve a great deal of distracting information which overwhelms working memory. Likewise, our final aim should be for pupils to work independently; this does not mean that constant independent learning will achieve this aim.

It seems clear that we should choose educational methods based primarily on what helps students learn most effectively and efficiently. After all, that’s the goal of education, right?

The question of aims versus methods reminded me of the recent discussion regarding content versus practices in the structure of the Next Generation Science Standards (NGSS). The NGSS lays out a series of aims for students, referred to as “performance expectations.” For example, here’s a fifth grade performance expectation:

5-ESS1-2: Represent data in graphical displays to reveal patterns of daily changes in length and direction of shadows, day and night, and the seasonal appearance of some stars in the night sky.

Some recent criticism has argued that the “representing data” part of this performance expectation (the science practice) should be separated from the understanding of “daily changes” (the science content). Much of the debate over this arrangement centers on pedagogy, but the NGSS is not about pedagogy. Standards in general are not about pedagogy- that is, they are not about educational methods. They are about aims, the goals of education. Thus, the NGSS do not prescribe a particular method of instruction.

In the case of science standards, it is critical that students both know the science content and be able to apply it in context. Going back to the example above, students need to be able to do things like take scientific data and graph it. It doesn’t have to be data about day and night necessarily, but the aim needs to be not just “know that there are patterns in the length of day and night” but to be able to do something with that information. By not making this “do something with it” requirement a clear enough part of the aim, many existing standards fall short.

You can debate about the best methods used to reach educational aims, but I think this quote from the NRC report How People Learn is instructive:

Are some… teaching techniques better than others? Is lecturing a poor way to teach, as many seem to claim? Is cooperative learning effective? Do attempts to use computers (technology-enhanced teaching) help achievement or hurt it?

This volume suggests that these are the wrong questions. Asking which teaching technique is best is analogous to asking which tool is best—a hammer, a screwdriver, a knife, or pliers. In teaching as in carpentry, the selection of tools depends on the task at hand and the materials one is working with. Books and lectures can be wonderfully efficient modes of transmitting new information for learning, exciting the imagination, and honing students’ critical faculties—but one would choose other kinds of activities to elicit from students their preconceptions and level of understanding, or to help them see the power of using meta-cognitive strategies to monitor their learning. Hands-on experiments can be a powerful way to ground emergent knowledge, but they do not alone evoke the underlying conceptual understandings that aid generalization. There is no universal best teaching practice.

In other words, we should not be dogmatic about pedagogical methods; instead, we should choose methods selectively, like tools, based on what we need to accomplish with a particular student or group of students that day. There’s nothing wrong with a good lecture, or a good inquiry-based activity, as long as each is used in an appropriate context- and, most importantly, as long as the end result is efficient and effective student learning.


Do You Really Need to Know Facts?

Heard this before? “Students in the 21st century don’t need to memorize facts. They can always just look them up online.”

There’s a good chance you have strong feelings about this statement. But I’d argue what’s crucial is which facts we’re talking about.

For example, memorizing all the state capitals in the U.S. is probably not useful. That’s something you can easily look up online at any time. But more importantly, knowing the names of the capitals does not contribute to a larger conceptual framework or a deeper understanding. In other words, it can’t be a building block for constructing more fundamental meaning. Hence there’s little reason for most people to commit the capitals to memory.

On the other hand, the number of electrons that can occupy each shell of an atom is also a fact, and also something you could look up online. But if you have that information memorized, you can build on it to explain why sodium and chlorine bond to form table salt… or why any two atoms will or will not bond, for that matter. Sure, you could look that up every time, but at some point you need the information memorized to have any real understanding of chemical reactions. Knowing those facts is a fundamental building block of understanding, shallow or deep. We could find similar examples of factual “building blocks” in history, math, or any other subject area.
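To make the point concrete, here is a sketch using the simplified shell-filling model (capacities 2, 8, 8) taught in introductory chemistry; the function is my own illustration, not a standard library:

```python
# Simplified electron-shell filling with capacities 2, 8, 8, as taught in
# introductory chemistry. Knowing these capacities as memorized facts is what
# lets you predict why sodium and chlorine bond.
SHELL_CAPACITIES = [2, 8, 8]

def shell_configuration(electrons):
    """Fill shells in order until all electrons are placed."""
    config = []
    for capacity in SHELL_CAPACITIES:
        if electrons <= 0:
            break
        placed = min(capacity, electrons)
        config.append(placed)
        electrons -= placed
    return config

sodium = shell_configuration(11)    # [2, 8, 1] -> one electron to give away
chlorine = shell_configuration(17)  # [2, 8, 7] -> one electron short of full
# Transferring one electron empties sodium's outer shell and fills chlorine's:
# that is the ionic bond in table salt.
```

The shell capacities are the memorized facts; the explanation of bonding is what you build on top of them.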

So instead of either elevating or disparaging the memorization of facts, we should think carefully about which facts students need to know. And just as importantly, we need to consider what they are going to do with those facts. Because there’s not much use in knowing facts about electrons if you can’t explain how electrons create chemistry… but you can’t explain chemistry without knowing facts about electrons.






When is “Surface-Level” Knowledge Good Enough?

Frequently we hear about the importance of having a “deeper” understanding of ideas and concepts. By this, people usually mean an understanding that goes beyond simple memorization of facts and includes the ability to apply that understanding in (appropriate) new contexts. Usually this goal of deeper understanding is presented as an unquestionably good thing, and in a sense it is- more depth of understanding is, in and of itself, obviously positive. However, when it comes down to choosing what should be taught, we have only a finite amount of time. The trade-off for greater depth of understanding is the greater amount of time required to achieve that level of understanding.

So I was interested to read Robert Marzano’s conclusion in Building Background Knowledge for Academic Achievement that even surface level background knowledge can be very useful. He uses the example of the word “correlation,” which many adults are familiar with in a general sense as meaning that when one thing increases, the other increases. Few people actually understand even relatively basic technical details of a statistical correlation, or could apply the concept in a statistical analysis, but their knowledge is nonetheless extremely useful because the concept of a correlation appears relatively frequently in many settings.

This implies that it’s worth thinking seriously about the depth of knowledge we consider necessary for each concept and idea, and making sure we choose to dive deeply only into the concepts that are most essential.

At the same time, there are probably a good number of things it’s OK to know only at the level of a general understanding. Moreover, a lack of this kind of basic background knowledge can be one of the main challenges facing disadvantaged students, who often don’t get the same exposure to a broad range of experiences as their more privileged peers- and it can be a major barrier to initial understanding of new concepts and to general reading comprehension. According to Marzano, even activities such as watching educational television can be a big help in this area, and sustained silent reading programs or other methods of building background knowledge can be effective as well.

Regardless of the method, it seems that a key component of improved reading comprehension and general academic success is developing a sufficiently large pool of background information. As a consequence, educational programs should balance both depth of understanding of key areas and general breadth of background knowledge.
