Nikon 1 V1: after three weeks

I’m here to tell you that it’s possible to insert an SD card wrong-way-up, at least in this camera. Which broke the spring that pushes the card out. Which complicates card removal, but otherwise it seems to have done no damage.

Now on to the more important stuff. Keep in mind that we’re still in First Impressions territory. This ain’t a review–it’s a progress report….


Late Note 7/28/2013: I’ve (finally) posted a short review of the camera on my Flickr.

Nikon 1 V1

I like the V1 camera. I like it a lot. It takes excellent photographs, weighs little, and is generally easy to use. It’s reasonably flexible. But there are issues. What follows is largely a discussion of things I wish Nikon had done differently, so there’s some danger you’ll think I dislike the camera. That would be a false impression.

The Lenses

The small sensor permits designing a small camera. If you believe you can live with that compromise, the next question involves the quality of the system’s lenses.

I’ve now taken a few hundred photos with the 30-110 mm lens, and continue to shoot with the 10-30. Both seem to be good-quality optics, but the 110 zoom is barely long enough for my usual purposes. I’ve certainly got a suitable big-camera Nikkor lens in my collection, and will likely buy the FT1 lens-mount adapter to address this, but I’d prefer that Nikon offer a longer lens that’s designed for the camera. (A wider-angle option would be real nice, too, but may be asking too much.)

Both Nikon 1 zoom lenses collapse into a retracted position. When a lens is retracted, the camera won’t take photographs, and pushing a button to extend the lens is an extra step. This is occasionally annoying, but I can tolerate it. I suppose it’s part of the price for the camera’s compact form factor.

Accessorizing

Since we last chatted, I’ve made a couple noteworthy modifications to the camera-as-carried. I’m now using Nikon’s “official” Nikon 1 wrist strap, and can’t say I find it better than the string strap it replaced. I imagine I’ll continue to seek a solution I actually like. I’ve also added Richard Franiec’s V1 Grip to the camera, which I like enormously; we’ve also added Franiec’s J1 grip to Joan’s camera (she likes the grip, but wishes it matched the white body and lenses). Those are seventy well-spent dollars, between the two cameras. My camera’s also sporting a mount for my monopod’s head, but we shan’t go into detail about that.

Excepting my photographer’s vest, this camera will not fit in any pocket I’m likely to wear. I’m using a Tamrac 5720 bag to store and carry the V1, the second lens, and whatever gear I think I need. It’s light, and fairly small; I’ll follow this route for a time. Joan’s stuffed her J1 and a similar kit into Tamrac’s 3440, which is smaller and designed to carry less additional gear. We could probably trade bags and both be happy.

Two Shutters

I’m certain there’s a good reason Nikon equipped this camera with both mechanical and electronic shutters, but it’s not yet clear to me how to choose. Both shutters are very fast, and for most purposes seem interchangeable. There are a few electronic-shutter features which are clearly advantageous in some circumstances, but you give up significant exposure control if you use this camera in its high-speed modes.

If someone has a useful comment, I’d certainly like to hear it.

The Nikon 1 V1 Viewfinder

The V1’s viewfinder is, well, interesting. In normal usage it’s excellent. The viewfinder presents the image more or less as the camera settings impact the photo, and reports many of the camera settings around the frame. For composing a stable, well-defined photograph, this is an excellent tool. But if you’re setting up an action shot, the view is less rosy.

After you press the shutter release, the V1 displays the image you just shot, and I’ve not found a way to turn off this review feature. (I’m hoping I’ve missed a setting, here. Can anyone help?) The review can be overridden by a partial press of the shutter button, but that’s painful. If, like me, you often follow a shot with an immediate reframe/refocus/SHOOT, the extra partial clicks will mess up your rhythm.

Another viewfinder annoyance is the wake-up delay after you’ve stopped shooting, which seems to be around one second. How much that matters will depend a lot on your photographic habits, but I guarantee it will cost you an occasional unexpected shot. For this shooter, at least, battery savings are not the absolute first priority; that’s what spares are for. I often carry my camera “hot,” anticipating opportunistic photographs.

Finally: The V1 viewfinder continues to disagree with my sunglasses, which is mostly an annoyance. (Anyone else having this problem? I imagine it’s sunglass-specific; mine are a mild grey prescription lens.)

The V1 Controls

As I said, this is not a review.

I have several concerns, but the main issue is that Nikon clearly doesn’t consider this camera an SLR, despite the SLR-like layout and purchase price. That the controls are differently arranged than I’m accustomed to is an adjustment I can make. That there are fewer non-menu controls is pretty much a given, as the camera has less available mounting surface. On the other hand, the specific external controls Nikon selected are certainly debatable, and the arrangement–regardless of logic–is probably less than optimal. The dial which changes camera modes is too easily bumped, for instance, and I’ve discovered that I can accidentally press several of the controls just by securely bracing the camera with my right hand.

A couple specific complaints: In my normal routine, I regularly change between auto-focus and manual focus. My D300 handles this with a switch; it’s a buried menu item on the V1. And I’ve come to depend on the D300’s Shooting Bank memory settings; that the V1 has no equivalent feature will certainly cause me endless frustration. (Yes, this is an advanced feature. But the V1 is complex and capable enough that photographers could profitably use it.)

My more general response, though, is that I’m still learning how to use this camera. I’ll have a better critique later.

Is It Any Good?

Yes, but I’m not sure who the market is. All cameras are compromises, and compromised. The useful question is whether the specific compromises instantiated in the Nikon 1 V1 are something I (or you, of course) can live with. For me, I suspect the answer is yes.

I love my D300. We’ve taken thousands of photographs, and I no longer give much thought to anything except “Which lens should I use?” and “What are the right presets?” If something unexpected comes up, I can generally find a better setting within seconds, because I’m accustomed to the system and the design’s efficient. I’m extremely comfortable with the camera. But it’s a heavy and obtrusive beast, and I’ve grown weary of those features. I’ve been considering alternatives for at least a year.

The best cameras get out of the photographer’s way. Good point-and-shoots accomplish this by automating nearly everything, at the price of flexibility and (for some photographers) creativity. While professional cameras these days are also highly automated, they tend in another direction, by making controls easily accessible; the price is a sometimes intimidating level of complexity (also a creativity barrier, for many). By this test the V1 is a poorly implemented, SLR-derived design. Most of the professional-camera controls are there. But they’re decidedly not easy to reach. Using this camera will involve devising strategies for working around that design failure.

A better V1 would mimic the D300’s efficiency, and I expect that future iterations will do so. Nikon could certainly make a version of this camera I’d unabashedly love. But Nikon’s marketers clearly don’t recognize that I exist, and that they might wisely serve my needs. There’s ample evidence that I’m not the only photographer seeking such a solution, and it’s clear that some of Nikon’s competitors are more directly addressing these concerns.


The V1 will be my primary camera for the next few months. We’ll see what I’m saying about it when summer ends.


The Dream Machine by Mitchell Waldrop: a short review

The Dream Machine, which is nominally a biography of J.C.R. Licklider, is actually an overview of the history of computing from M.I.T.’s Whirlwind effort through the beginnings of true personal computing in Silicon Valley; much of the book concerns ARPA and ARPAnet. Lick’s biography is embedded in the story, but its purpose is to center the discussion. The predominant focus of the book is on the efforts of Licklider’s colleagues, and it often strays far from his life story.

This is a terrific book. The writing is lucid, and the research–though predominantly from secondary sources–is excellent. If you plan to read one book about the ARPA computing effort, this should be that book.


This short review was originally published on LibraryThing.


I’m Feeling Lucky by Douglas Edwards: a short review

This is a better book than I anticipated. Edwards–one of Google’s earliest hires–was obviously fascinated by Google’s founders, and the culture of the company they created. We watch as they repeatedly reorganize the leadership structure–an important concern for a middle manager–and as the author learns how he can contribute to the company. It’s an interesting, nitty-gritty view of the office (and its politics) from a privileged seat. This is well worth your time.

Google bears a resemblance to Carnegie Steel. Like Carnegie, Google is closely controlled, respects statistics, and is consciously disruptive. New technology is constantly put in place; failed projects are scrapped and forgotten. The leadership worries a lot about competitors, and embraces change as a competitive tool. Small edges are constantly devised and implemented, while big, industry-changing innovations are rolled out with astonishing regularity. Also: Like Andrew Carnegie, Sergey Brin and Larry Page are kinda preachy, and seem blind to some of the impacts and pitfalls of their colossus.

Andrew Carnegie eventually retired, and worked hard at giving away his fortune. His successors–JP Morgan allies–rebuilt the company on a different model. It seems probable that Google will meet a similar fate, and that worries me far more than the casual arrogance of the company founders.


This short review was originally published on LibraryThing.

Nikon 1 V1: after one day

Nikon 1 V1

These notes are from someone who’s long used film and digital SLRs. Folks considering moving up from a point and shoot camera may or may not find them useful.

Some more or less random comments after spending yesterday experimenting with my new camera. This is not intended as a full-out review; it’s just my first impressions–and these opinions may well change after I’ve had time to better acquaint myself with the new system. Since I took over a year to get comfortable with my D300, it may be months before I’m satisfied I understand this camera. It’s perhaps useful to know that I’ve so far used mostly the 10-30 mm lens and have only used the V1 in Still Camera mode. I took a few more than 100 photographs yesterday.

First thing: In use, the camera feels like a miniature (D)SLR. The miniature part of that sentence is important because the size will require some adjustments to my habits; the SLR part’s important because that’s what I was hoping it would feel like. (In contrast, Joan’s J1 strikes me as a big point-and-shoot, even though they’re incarnations of the same basic design.) I’ve had prior experience with small SLRs, as my primary camera was a Minolta Zoom 110 for a year or two; my brother owned a (much more capable) Olympus OM-1 at the same time and used it to take excellent photographs.

Second thing: The V1’s capable of taking fine photographs. I was experimenting yesterday, so taking quality shots wasn’t my first concern, but I was quite satisfied with a few of the pics.

The V1’s electronic viewfinder’s impressive–bright and surprisingly sharp–but has three quirks. The more important quirk is that it doesn’t get along with my sunglasses, which make the image look like a failing television. While I’ll certainly adjust, that’s annoying. A lesser annoyance is that the finder goes to preview mode immediately after taking a picture (pressing the shutter release clears this). The viewfinder also turns itself off if you stop taking pictures, which is an entirely new viewfinder experience. All three quirks have the potential to cost me an occasional photograph. The viewfinder displays a whole lot of nicely-arranged icons reporting the status of nearly everything, which I trust I’ll find useful when I stop complaining about the sunglasses. (In real life, though, I only occasionally check those on the D300, though I’m certain others find them essential. I may decide mostly to turn them off.) Of course it’s also possible to use the LCD “monitor” display as a viewfinder; I was doing that to frame flower shots yesterday, and will likely continue to do so.

On-the-fly camera adjustments will require relearning a bunch of habits. In particular, changing the aperture (or shutter speed) with a switch seems quite odd, but is something I can learn. I’m definitely not yet comfortable enough with the camera to discuss the overall competence of the controls, but it’s already clear enough that the design assumes I’m moving “up” the Nikon product line from a P&S, not “down” from a DSLR. Nikon perhaps doesn’t understand this part of the market.

Joan and I have contrasting viewpoints about the Nikon 1 menu system. To Joan, coming to the J1 from a point-and-shoot background, the menus seem long and complicated. Compared to my D300’s, the menus seem abbreviated and occasionally disappointing. I already know I’m going to miss my D300 presets. (I’m old enough to remember IBM’s PCjr. Some of Nikon’s design decisions have that feel.)

Just holding the camera’s going to require some rethinking. Using my left hand to hold the lens and brace the camera isn’t going to work the way it has in the past. Not only is the lens too small for that approach to be realistic, but the camera’s so light that it may be counterproductive. I’m still playing with that.

I’ve long used a wrist strap–mine wraps around my hand, more or less like a glove–to hold my camera, both because I dislike shoulder straps and because the hand strap helps to stabilize the shot. My strap is part of the reason I can successfully hand hold a long lens under ballpark lights. (Yeah, this is a personal quirk.) Finding a similar solution for this camera may be a challenge–particularly since the shutter trigger’s right next to the strap connector on the Nikon 1 body. At the moment I’m using a simple wrist strap I borrowed from an old P&S, but that’s not where I want to end up. The borrowed strap will work for now, and I’ll experiment until I’m happy. Or at least satisfied that I can’t fix this.

I’ll also need to figure out how to pack this camera. My D300 lives in a Tamrac holster, and I hang a spare lens and other gear off the sides of the bag. With this lightweight camera I’ll likely do something simpler. (I go through this routine every time I buy a new camera. We’ll have to see how things shake out.)

Finally, I’ll be upgrading my software to support the new camera. It looks like Photoshop Elements 8 doesn’t support this camera’s RAW (NEF) format, and although Bibble 5 does support the camera, the product’s been sold to Corel and Bibble will not be getting further updates. Whether I just make the obvious upgrades (PSE 10 and Corel’s AfterShot) or switch to something else remains to be seen. This is complicated, slightly, because I’m simultaneously moving my computing from a Mac to a PC.

Last words: It’s too early to tell, really, but so far I like the camera. It remains to be seen whether the transition’s going to be painful or joyful. I expect compromises; the ultimate question is whether the design is too compromised for my comfort.

Revised on 3/18: Mostly I just polished the language a bit, but I made a significant change to my description of the Viewfinder behavior.


Computing in the Middle Ages by Severo Ornstein: a review

This was definitely not the book I expected, but is well worth reading. Ornstein has things to say, and knows how to say them.

The author was involved in computing from the mid-fifties to the early eighties, and played fairly important roles in the SAGE, TX-2, and LINC projects, all of which are key to understanding how computing developed. He was also heavily involved in BBN’s pioneering ARPAnet efforts, and moved on to Xerox PARC in its prime, where he helped design the first laser printer. So he had a first-hand view of the development of electronic computing in the period between the pioneering efforts and the beginnings of microcomputing. This is a different, quite personal, account of what his computing projects were like, and his assessment of the issues as they looked to the participants during the period.

So there’s little new here, but there’s a level of detail about specific efforts, and about the personnel involved, that the journalists and historians who’ve tackled the topics lack. There’s also a quite-deliberate recasting of the context, which is Ornstein’s excuse for writing the book; he thinks the more formal histories impose more design (or perhaps a destiny) on the efforts than was actually there. Interesting stuff, with wry humor.

The chapters have odd, amusingly Victorian, titles. For instance, Chapter 5: “A piano enters the lab and comes up against TX-2. DEC is formed and there is an error on Page 217. Fourier is proven sound and we land on an aircraft carrier.” The book might be worth reading just for those.


This review was originally published on LibraryThing.


Electronic Computers by Saul Rosen: a review

A journal article, not a book; available here.

This is easily the best short survey of the early history of computing I’ve seen, and is well worth a read. It’s an excellent 30-page essay describing electronic computing history through the late 1960s, with most of the significant projects and companies briefly sketched and their contributions–and failures–described. The essay is organized by technological era (vacuum tube, transistor, early ICs), with each era’s discussion organized by company or project. Some effort is made to put each project into historical and technical context.

The author mostly ignores early electro-mechanical computing projects, and almost completely ignores efforts outside the United States. This significant shortcoming is acknowledged in the introductory section.

While the emphasis is on describing projects, the author provides quite a bit of analysis. Details, of course, are sacrificed for brevity’s sake, and for focus. A fascinating, and well-done, survey.


This review is also posted on LibraryThing.


In the Plex by Steven Levy: a short review

Not really a review; just a couple comments….

Good book, but probably a hundred pages too long. If you’ve followed Google’s history over the years, you’ll learn some interesting things but you’ll have to slog through lots of stuff that you already knew. (Not a sin, really; just a fact.)

This book is not likely to make you love the company. Larry Page, in particular, comes off as a brilliant idiot.


This short review was originally published on LibraryThing.


Beautiful Code edited by Andy Oram: a short review

Essentially, this is thirty or so authors’ takes on what makes code beautiful. The approaches vary widely, as does the writing. Some essays are full of code, others of theory, still others mix the two. There are a handful I’d not call beautiful. This is rough and slow reading, and a very long book.

Parts are over my head, of course; the book’s clearly intended that way. But parts are just wonderful, and make the book worthwhile. I’m guessing each reader will prefer different essays.


This short review was originally published on LibraryThing.


Beloit Snappers @ Quad Cities River Bandits, September 3, 2009

John O'Donnell Stadium

Another tl;dr essay discussing Jeff Sackmann’s minor league play-by-play data; the first was here. This will be far more understandable if you have worked with Retrosheet event files than if you’ve not, though anyone who habitually scores ballgames can likely follow the discussion if they’re really determined. Retrosheet file documentation begins here, and BEVENT’s default output is described near the end of this file.

Out of the Box

It may be helpful to start with a box score. This was generated from the Sackmann event file by Retrosheet’s program BOX for the September 3, 2009, game I mentioned in the title.

     Game of 9/3/2009 -- Beloit at Quad Cities (N)

  Beloit        AB  R  H RBI    Quad Cities   AB  R  H RBI  
Beresford J, ss  1  1  0  0   Ingram D, cf     3  1  1  1   
De La Osa D, ss  4  0  1  1   Stidham J, 2b    4  0  0  0   
Thompson D, 2b   4  0  0  0   Curtis J, 3b     3  1  1  1   
Hicks A, cf      3  1  1  0   Scruggs X, 1b    5  1  1  0   
Waltenbury J, 1b 4  0  0  1   Racobaldo R, dh  5  1  1  2   
Rams D, c        4  0  1  1   Parejo F, lf     3  2  3  1   
Harrington M, lf 3  0  0  0   Rodriguez R, rf  3  1  1  0   
Hanson N, 3b     4  0  1  0   Cawley J, c      4  1  2  3   
Severino A, dh   3  0  1  0   Bolivar D, ss    4  0  0  0   
Morales A, rf    4  0  1  0   
                -- -- -- --                   -- -- -- --
                34  2  6  3                   34  8 10  8   

Beloit           111 000 000 --  3
Quad Cities      123 020 00x --  8
  1 out when game ended.

  Beloit               IP  H  R ER BB SO
Hendriks L            4.0  8  6  0  1  3
Marquez W             2.1  2  2  0  4  2
Stillings B           2.0  0  0  0  1  2

  Quad Cities          IP  H  R ER BB SO
Miller S              1.0  1  1  0  1  1
McGregor S            6.0  4  1  0  1 11
Delgado R             1.1  1  0  0  0  2

E -- Bolivar D, Thompson D, Hicks A 2, Scruggs X
DP -- Beloit 1
LOB -- Beloit 10, Quad Cities 7
2B -- Curtis J, Scruggs X
3B -- Hicks A, Morales A
SB -- Ingram D, Severino A, Hanson N, Harrington M
CS -- Ingram D
HBP -- by Marquez W (Curtis J), by Delgado R (Harrington 
  M)
WP -- Hendriks L, Marquez W 3
PB -- Rams D, Cawley J
T -- 0:00
A -- 0

You may wish to compare this box to MiLB’s box for the same game. Even without comparing, though, two issues are readily apparent. First off, it’s difficult to imagine why an 8-3 game would end with one out in the ninth. Baseball just doesn’t work like that. Similarly puzzling are the innings totals for both pitching staffs: It seems that this was indeed an 8 1/3 inning game.

Comparisons with the MiLB box raise some more flags: Ten of the hitters have different counts in AB, R, H, and/or RBI. Four of the pitchers differ in IP, R, H, BB, and/or SO. (ER is a separate issue, not under discussion today.) I see other differences elsewhere, but see no need to go into detail. I think I’ve demonstrated that there are problems here, folks. Let’s see if we can figure them out.

Some Useful Background

Jeff Sackmann collected several years’ minor league play-by-play data to use for a specific project, his Minor League Splits website. He’s discontinued that project, but has voluntarily shared the underlying data with other researchers. There are problems, which he recognizes, with the data store, and I’m exploring the scope of those. I have some questions which can only be examined with minor league play-by-play data, so it’s necessary that I understand this data and its shortcomings.

Sackmann built his data store by collecting the game accounts on the Minor League Baseball (MiLB) website with a bot, then running them through a program which I usually call the Sackmann parser. Since the Sackmann files are nominally in Retrosheet (RS) format, my immediate project is to run those files through what you might call a translator, called BEVENT, which converts RS files to a standard database format and is available from the Retrosheet website. This is a progress report on that conversion project. I gave a preliminary report about the effort a couple weeks ago in a prior essay.

I’ve been using Jeff’s 2009 Midwest League event file, which contains game accounts for the entire 2009 season, for a testbed. Retrosheet VP Clem Comly has experimented some with the 2009 MWL file and reports that it averages two or three erroneous records per game. Erroneous, in this case, means records which won’t be interpreted correctly by the BEVENT parser. Since the league played about 1,000 games in 2009, including the championship playoff, that works out to 2,500 or so bad records in that play-by-play file, which contains 115,278 records overall (I’ve deleted one badly-damaged game account, For200908170, from the file, as has Clem). Averages can mislead, though, as the errors are clustered. Some of the clustering results from transcription errors which make subsequent, correct, records appear to be erroneous, thus creating an error cascade. The common case is a data record transcription which loses a putout, thus apparently extending the inning. This confuses BEVENT, which blindly assumes innings have three outs. So there are some game accounts with many errors, and many evidently-flawless game accounts.

That’s my paraphrase of Clem’s analysis, by the way. I believe this document summarizes his main points adequately, but it’s fair to say I’ve twisted his commentary around a bit.
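
Here’s a quick sketch of one way to hunt for those cascades yourself: since a lost putout tends to stretch a half-inning past plausibility, simply counting the play records in each half-inning is a cheap screen. (This is only a sketch, in Python; the filename and the twelve-record cutoff are my assumptions, not anything from Clem’s analysis.)

    # Flag half-innings with suspiciously many play records.
    # Assumes a Sackmann-format (Retrosheet-style) event file;
    # the filename and the cutoff are placeholders.
    from collections import defaultdict

    counts = defaultdict(int)
    game_id = None

    with open('2009MWL.EVN') as f:
        for line in f:
            fields = line.rstrip('\n').split(',')
            if fields[0] == 'id':
                game_id = fields[1]
            elif fields[0] == 'play' and fields[-1] != 'NP':
                # fields: play, inning, batting team (0/1), batter, ...
                counts[(game_id, fields[1], fields[2])] += 1

    for (game, inning, half), n in sorted(counts.items()):
        if n > 12:  # arbitrary cutoff; tune to taste
            print(game, inning, half, n)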

An Example Game

Perhaps we’ll profit if we examine the Beloit/Quad Cities game which is incorrectly summarized above. Let’s compare three versions of the play-by-play:

  • The game as reported on the Minor League Baseball (MiLB) website.
  • Sackmann’s version, which reformats the MiLB report into Retrosheet format. (I’ve shortened Jeff’s team designators, but it’s otherwise an exact copy of his data. Within this essay I’ve also removed hit location data to reduce the clutter.)
  • BEVENT’s version, which reformats Sackmann’s into a database-friendly format. This essay shows only the first few fields of the standard BEVENT output, though the linked file has the complete standard output.

If you compare the files, you should be able to convince yourself that they’re the same game. For instance, all show the game’s first play as an error by the shortstop, and the last as a fly to right. It shouldn’t take long to verify that all three show that the first inning ends with a shortstop-to-first groundout. Besides, they all claim to be the same game, which is presumably significant.

The data errors in the play-by-play are less obvious, and I’m pleased that Clem helped me identify those. Let’s take a little tour:

Second Inning

In the top half of the second, Angel Morales struck out, with some subsequent action on the basepaths. Here’s how the various versions record this:

  • MiLB: Angel Morales strikes out swinging. Adan Severino steals (3) 2nd base. Adan Severino advances to 3rd, on throwing error by catcher Jack Cawley.
  • Sackmann: play,2,0,519044,,,K+SB2;1-3(E2)(E2/TH)
  • BEVENT:
    Qua200909030,Bel,2,0,1,0,0,1,1,519044,?,543520,?,458733,,,K+SB2;1-3(E2)(E2/TH)
    • Key to the partial BEVENT output format I’m using here:
    • “Qua200909030”: Game ID, with home team embedded
    • “Bel”: Visiting Team
    • “2”: Inning
    • “0”: Team at Bat (0 = visitor, 1 = home)
    • “1”: Outs
    • “0”: Balls (never known in this file)
    • “0”: Strikes (likewise)
    • “1”: Visiting Team Score
    • “1”: Home Team Score
    • “519044”: Responsible Batter’s ID
    • “?”: Batter’s Handedness (missing in this specific file)
    • “543520”: Responsible Pitcher’s ID
    • “?”: Pitcher’s Handedness (missing in this specific file)
    • “458733”: ID of Runner on First
    • “”: ID of Runner on Second
    • “”: ID of Runner on Third
    • “K+SB2;1-3(E2)(E2/TH)”: Sackmann parser’s representation of the play, in Retrosheet notation
    • The player ID numbers are those assigned by the Minor League Baseball website (by Major League Baseball Advanced Media [MLBAM], actually); every professional player has one.
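
If you’d like to poke at these records programmatically, the partial output splits apart easily. Here’s a minimal sketch; the field names are labels of convenience following the key above, and this is the truncated output shown in this essay, not BEVENT’s full record:

    # Split one line of the partial BEVENT output into named fields.
    import csv, io

    FIELDS = ['game_id', 'visiting_team', 'inning', 'batting_team',
              'outs', 'balls', 'strikes', 'visitor_score', 'home_score',
              'batter_id', 'batter_hand', 'pitcher_id', 'pitcher_hand',
              'runner_1b', 'runner_2b', 'runner_3b', 'event']

    line = ('Qua200909030,Bel,2,0,1,0,0,1,1,519044,?,543520,?,'
            '458733,,,K+SB2;1-3(E2)(E2/TH)')
    record = dict(zip(FIELDS, next(csv.reader(io.StringIO(line)))))
    print(record['runner_1b'], record['event'])
    # -> 458733 K+SB2;1-3(E2)(E2/TH)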

Sackmann’s parser made a mistake here; K+SB2;1-3(E2)(E2/TH) should have a period (dot) where the semicolon is, and this play would better have been scored K+SB2.1-3(E2/TH). (Note that the Sackmann parser double-reported the error.) All this matters because it confused BEVENT, which couldn’t interpret the code and left baserunner 458733 (that would be Severino) on first base, rather than third. Which caused problems for BEVENT on the next play:

  • MiLB: Dominic De La Osa singles on a line drive to right fielder Ryde Rodriguez. Adan Severino scores.
  • Sackmann: play,2,0,448279,,,S9/L.3-H
  • BEVENT:
    Qua200909030,Bel,2,0,2,0,0,1,1,448279,?,543520,?,,458733,,S9/L.3-H

BEVENT’s parser panics. “Hey, who’s this guy on third you’ve got scoring? And what am I supposed to do with the guy on first base? He’s in the batter’s way.” So the BEVENT-generated file has misplaced a run and lost track of the batter-runner. Not good.

The next batter grounded out to end both the inning and this short error cascade.

How often does this SB-with-subsequent-play error occur? I estimate there are two or three hundred instances in the 2009 MWL event file. It looks to me like these could be fixed by running search-and-replace on the file a couple times; a sketch follows.
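
To make “search-and-replace” concrete, here’s a rough sketch of the repair I have in mind. The patterns are my guesses at the general case, untested against the full file, so treat this as a starting point rather than a fix:

    # Repair the stolen-base-plus-advance pattern described above:
    #   K+SB2;1-3(E2)(E2/TH)  ->  K+SB2.1-3(E2/TH)
    import re

    def fix_sb_advance(event):
        # Swap the stray ';' for the '.' Retrosheet expects
        # before the runner-advance section...
        event = re.sub(r'(SB[23H][^;.]*);(?=\d-)', r'\1.', event)
        # ...and collapse the double-reported error code.
        event = re.sub(r'\((E\d)\)\(\1/', r'(\1/', event)
        return event

    print(fix_sb_advance('K+SB2;1-3(E2)(E2/TH)'))
    # -> K+SB2.1-3(E2/TH)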

Fourth Inning

The Quad Cities half of the fourth ended with a double play:

  • MiLB: Jermaine Curtis pops into double play in foul territory, first baseman Jon Waltenbury to pitcher Liam Hendriks. D’Marcus Ingram doubled off 1st.
  • Sackmann: play,4,1,543079,,,3/PF.?X?(31)
  • BEVENT:
    Qua200909030,Bel,4,1,1,0,0,3,6,543079,?,521230,?,502080,,,3/PF.?X?(31)

Oops. What’s that? ?X?(31) doesn’t mean anything to the BEVENT parser, which ignores it (notice 502080/Ingram still standing on first). Better if the Sackmann parser had coded this play as 3/FL/DP.1X1(31). (Clem counts this as two coding errors, by the way; one is purely technical and could be classified as a parser quirk.)

This data error is pretty serious. Instead of an inning-ending DP, BEVENT believes there are two out and a baserunner on first. This seems likely to have consequences. They’ll begin to show up on the next play:

  • MiLB: Drew Thompson grounds out to first baseman Xavier Scruggs.
  • Sackmann: play,5,0,458711,,,3/G
  • BEVENT:
    Qua200909030,Bel,4,1,2,0,0,3,6,489305,?,521230,?,502080,,,3/G

The most important thing to notice is that 4 following Bel in the BEVENT line: While the MiLB and Sackmann accounts of the game have moved on to the fifth inning, BEVENT thinks we’re still in the fourth. As far as this account is concerned, 489305/Drew Thompson has jumped teams and is now batting for Beloit; similarly, pitcher 521230/Liam Hendriks has been swapped to the River Bandits. And 502080/D’Marcus Ingram remains on first base. YIKES!

The next play looks like this:

  • MiLB: Aaron Hicks flies out to left fielder Frederick Parejo.
  • Sackmann: play,5,0,543305,,,7/F
  • BEVENT:
    Qua200909030,Bel,5,0,0,0,0,3,6,458711,?,543520,?,,,,7/F

We’ve straightened out the pitching situation–Scott McGregor’s magically appeared on the mound. And we’ve released Ingram from his baserunning duties so he can return to QC’s CF. But: We’ve still lost track of one out. That will haunt us.

Fifth Inning

This sort of thing’s going to go on for the rest of the game. The bottom of the fifth starts with a pitching change–

  • MiLB: Pitcher Change: Winston Marquez replaces Liam Hendriks.
  • Sackmann: play,5,1,489305,,,NP
    sub,470504,Winston Marquez,0,0,1
  • BEVENT:
    Qua200909030,Bel,5,0,2,0,0,3,6,501858,?,543520,?,,,,K

–except BEVENT believes Beloit’s still at bat and recognizes 470504/Marquez as a Beloit pitcher. The program doesn’t make the substitution because Marquez shouldn’t be pitching for the opposition. Parsers can be quirky, folks. That BEVENT recognizes this as an error after missing a similar data conflict a few lines ago can likely be explained, but it’s still odd. And you could certainly make a case that it should stop processing the game and report an error when it finds this sort of contradiction.

Seventh Inning

The top of the seventh begins with a Drew Thompson single, then Aaron Hicks hits into a DP–

  • MiLB: Drew Thompson singles on a line drive to center fielder D’Marcus Ingram.
    Aaron Hicks grounds into double play, second baseman Jason Stidham to shortstop Domnit Bolivar to first baseman Xavier Scruggs. Drew Thompson out at 2nd.
  • Sackmann: play,7,0,458711,,,S8/L
    play,7,0,543305,,,46(1)3/GDP/G4
  • BEVENT:
    Qua200909030,Bel,6,1,2,0,0,3,8,521088,?,470504,?,543079,502781,502080,S8/L
    Qua200909030,Bel,6,1,2,0,0,3,8,527050,?,470504,?,521088,502781,502080,46(1)3/GDP/G4

–except BEVENT’s still in the sixth with the bases loaded. So it throws away the current runner on first (543079/Jermaine Curtis), replaces him with 521088/Thompson, and wipes out whichever of them is actually there on the subsequent DP. (No, I don’t know why it thought replacing the baserunner made sense. A while back it threw away the batter.) But it can’t be a DP; there are already two out. So we’ve misplaced another out, and will be off by two for the rest of the contest. This is getting pretty ugly, friends.

You should be getting the picture. Before the game ends we’ll see two more pitching changes that the BEVENT parser will mishandle, and there are certainly some impacts on nearly everything from having the first two batters’ results for each inning awarded to the opponent’s team. All because we missed the second out on a fourth inning double play.

So how often does the missed-double-play event error occur? Looks like there are about 50 in the 2009 Midwest League event file. These could reasonably, albeit inconveniently, be recovered by eyeballing the MiLB game accounts and manually fixing the data.
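
Finding them, at least, is easy to automate. Something like this sketch (filename again assumed) will list the offending records with their line numbers, ready for hand repair:

    # List play records containing the '?X?' runner token so the
    # broken double plays can be fixed against the MiLB accounts.
    with open('2009MWL.EVN') as f:
        for lineno, line in enumerate(f, 1):
            if line.startswith('play,') and '?X?' in line:
                print(lineno, line.rstrip())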

Back in the Box

So how did we do? If I apply those two fixes, does the BOX program generate the correct information? Let’s check:

     Game of 9/3/2009 -- Beloit at Quad Cities (N)

  Beloit        AB  R  H RBI    Quad Cities   AB  R  H RBI  
Beresford J, ss  1  1  0  0   Ingram D, cf     4  1  1  1   
De La Osa D, ss  4  0  1  1   Stidham J, 2b    4  0  0  0   
Thompson D, 2b   5  0  2  0   Curtis J, 3b     3  1  1  1   
Hicks A, cf      4  1  1  0   Scruggs X, 1b    4  1  1  0   
Waltenbury J, 1b 4  0  0  1   Racobaldo R, dh  4  1  1  2   
Rams D, c        4  0  1  1   Parejo F, lf     3  2  2  1   
Harrington M, lf 3  0  0  0   Rodriguez R, rf  3  1  1  0   
Hanson N, 3b     4  0  0  0   Cawley J, c      4  1  2  3   
Severino A, dh   4  1  2  0   Bolivar D, ss    3  0  0  0   
Morales A, rf    3  0  0  0   
                -- -- -- --                   -- -- -- --
                36  3  7  3                   32  8  9  8   

Beloit           111 000 000 --  3
Quad Cities      123 020 00x --  8

  Beloit               IP  H  R ER BB SO
Hendriks L            4.0  8  6  0  1  3
Marquez W             2.0  1  2  0  4  3
Stillings B           2.0  0  0  0  0  2

  Quad Cities          IP  H  R ER BB SO
Miller S              1.0  1  1  0  1  1
McGregor S            6.0  5  2  0  1  9
Delgado R             2.0  1  0  0  1  3

E -- Bolivar D, Thompson D, Cawley J, Hicks A 2, Scruggs X
DP -- Beloit 1, Quad Cities 1
LOB -- Beloit 9, Quad Cities 7
2B -- Curtis J, Scruggs X
3B -- Hicks A, Thompson D
SB -- Ingram D, Severino A 2, Hanson N
CS -- Ingram D
HBP -- by Marquez W (Curtis J), by Stillings B (Bolivar D)
WP -- Hendriks L, Marquez W 3
PB -- Rams D, Cawley J
T -- 0:00
A -- 0

Yes! That’s much better.


Where Does This Leave Us?

Sackmann’s parser made two significant errors in this game account, each of which generated problems on subsequent plays. These problems appear as five more data errors, because BEVENT mishandles them even though the plays (events) were correctly coded. That’s a common pattern in this data, and something we’ll need to give some thought to. But I’m not ready to go there yet.

The next essay will address some technical points; then I’ll raise some questions for discussion.