
Deconstructing Constructed – Testing Testing 1 2 3

Read Josh Silvestri every week... at StarCityGames.com!
Tuesday, September 9th

I mean listen, we’re sitting here talking about practice, not a game, not a game, not a game, but we’re talking about practice. Not the game that I go out there and die for and play every game like it’s my last, but we’re talking about practice, man. How silly is that?
Allen Iverson

One of the big questions I always hear at tournaments is “how much did you test?” This is followed by the inevitable “was it worth it?”

Typically the answer is yes, but other times you’ll hear a long-winded admission of failure, or of the general disinterest that resulted from the process. Sometimes you can also infer it from the deck selections people make before a Grand Prix or Pro Tour, where you see groups split, each playing whatever they felt comfortable with individually. The most obvious indicator is when a group decides on a deck and then bombs miserably… in those cases the testing is likely at fault; not inherently, but because the data and variables were flawed in a way that wasn’t caught.

Some of the more common pitfalls that sneak into the testing process are surprisingly difficult to avoid, especially if you don’t catch them early on. The reason is simple: success is seductive. The better you do, regardless of why, the further you continue down the path to ruin. The more attached you get to a deck and your results with it, the more difficult it is to change. I have a friend who does this nearly every season… he picks a deck early on to try and fight through the field, and by the time it becomes a good idea to switch, he simply can’t. He hasn’t practiced with anything else, nor does he have enough raw skill to mise it out with a new deck selection.

Unless you happen upon the objectively best deck early in testing, it’s a bad idea to get hooked on one deck and simply proclaim it the best. This is especially true if your testing happens in less controlled settings, where the skill level involved is either unknown or irrelevant (in other words, where outside influences color the final results). The issue of unknown skill-sets marginalizing results has always existed, but Magic Online has made it more widespread. People tend to assume that doing very well online will necessarily translate over to paper Magic. While that can sometimes be the case, more often than not there’s something missing from the equation that can cost you.

Not only is this a bad idea for your own sake, but it can quickly skew the results of your entire testing group. Once the taint spreads, it becomes very difficult to reverse, especially in larger groups with people who haven’t put in the same amount of effort. You know that guy… the guy who reads the decks on the net and in articles and chats everyone up because he really needs an idea to latch onto. He finds something that fits his criteria of good and goes from there, steamrolling people on the mailing list and refusing to play other things at testing sessions, because dammit, he has the best deck, and why aren’t you listening yet?

Power comes from lying. Lying big, and gettin’ the whole damn world to play along with you. Once you got everybody agreeing with what they know in their hearts ain’t true, you’ve got ’em by the balls.
Sin City

Realistically, many people are just looking for an in to the format, an edge, and it’s human nature to take the path of least resistance and give in to snake-oil peddling. Once upon a time, Jarrod Bright made this his personal conviction, and every article slammed people for playing BG.dec wrong. Of course it was the best deck! All indicators in his testing said so! This is actually a common trend throughout the history of The Rock and all variations thereof; even Richard Feldman fell into this trap before (although he argued better than the vast majority), but I digress.

The point was that people who had tested a little bit tended to just take some of these statements at face value, and thus sank more time into trying to improve the results and the deck. At some point, reasonable people stopped and really looked at what the deck was trying to accomplish, eventually discarding the deck for not being what it was cracked up to be. A more recent example of this could be Dredge from the Extended season, especially at the Pro Tour and Grand Prix level. Was it the best deck? Yes. Was it objectively the best deck after word spread around to literally eighty percent or more of the room? Not so much. Some people still succeeded with the deck, but others simply fell into line behind the hype and were executed accordingly. There are plenty of other cases like this if you look, the most notable one (off the top of my head) being one of the Grand Prix tournaments for Onslaught Block Constructed. Everyone knew Goblins was great, and the best deck going in, so every relevant person not playing Goblins was rocking the anti-deck*, Slide. Only a couple of Goblins decks made Day 2, and Slide eventually won the whole thing.

* There were a few other decks that popped up, like ‘Bad Form,’ but the basic point is that everyone knew and aimed at the best deck and kneecapped it before the tournament ever started.

Despite the rather public knowledge about Goblins, many testing groups stopped at the point where they felt they had the best deck and simply ran with it. This can be another trap, in which group-think takes over and people stop looking at the ramifications of a public ‘best deck,’ regardless of whether it actually is one. Perception can be just as deadly as a deckbuilding error if it leads you to play something everyone is expecting but that you can’t power through the hate with. So can playing the supposed best deck based on conjecture and theory rather than a factual basis. If you know the best deck is definitely the best deck, and testing shows you can power through the common hate and the perceived decks that beat it, then play it! Trix was a great example of this: it could lose to specific cards and had problems with certain niche decks, but it absolutely destroyed the bulk of the field and still had a great shot against dedicated hate. Affinity is another example of a deck living up to all the hype associated with it.

Of course, the opposite can be true. Perhaps someone on the list develops a counter-strategy that legitimately tanks the best deck and doesn’t scoop to everything else in the field. What then? You have to weigh the reported facts and the amount of time before the event against the value of this strategy being successful. For a brief time in Onslaught Block Constructed, I had figured out the key to the format. I had completely obliterated any sense of competition from it… all I had to do was convince the people in my testing group to play it before their various PTQs. I couldn’t do it. Not even close. There simply wasn’t enough time for the serious players to learn and test my deck sufficiently before their events, and the rest were simply too skeptical to give it a fair shot. Part of this is because we had done all of our discussion online, and hence I couldn’t just flip the table during a test session and beat them with it until they caved in to my demands. The other part is that it becomes difficult to explain in detail exactly how to play a complicated counter-strategy deck when the counter plan doesn’t revolve around one or two cards.

You need to give people time and reason to credibly test your ideas, and ideally you’d be there to help guide them along the way and get them started. Incidentally, this is also why whenever someguy9001 responds to every argument against his deck in a forum thread with ‘test it!’, he will be told to suck it and/or ignored. People’s time is precious, especially if you already have a goal in mind for the event that you haven’t reached yet. Someone once said, and I believe it was Ken Krouner, that once you’ve found the best deck, it should be involved in every single game until the time of the actual tournament. Game plans, deckbuilding, results, and all information derived from testing should be shared as often as possible to keep everyone on the same page, because ultimately the goal of the group is to have as many people as possible be successful. The concepts behind a testing group will fade and collapse after a single event if this is ignored. You don’t want to be the best out of everyone because you figured out another edge along the way and didn’t share; on some level, you should want the collective to succeed.

As for game selection and testing itself, that really deserves a full article, but the Cliffs Notes version is this: you should be playing against what public perception says is the best deck, what you think the best deck is (under the assumption that other people will actually play it), and the three most popular decks. Perhaps this sounds trite and overused, but I find it to be just as true now as when I first started taking Magic seriously. The idea is, in what will assuredly be a limited time frame, to get accurate results against the most common and most powerful decks in the field, which can then be translated into deck tweaks, revisions, or even straight audibles with time to retest.

In a group with limited player quality, you want the best player among you to spread themselves across many different decks. A competent pilot’s natural skill will always boost their results and skew things, so at the very least spread that skew evenly, so every deck’s results carry the same sort of curve. Ultimately the goal of the group should be to become competent, if not proficient, with many different styles of decks, but more often than not people will cling to one or two deck choices throughout testing and make life difficult. The best way to deal with this is a mixture of full matches against different players, even if the skill levels differ, especially if you have an idea of the general skill level going into the event itself.

Keeping organized notes is another must-have, even if this means one person needs to suck it up each session and keep in-depth track of results. Straight numbers are far less helpful than noted trends in gameplay and shorthand notes about why each deck won or lost individual games. One of the most frequent errors I see is failing to mark mulligans, color screw, flood, or any other factor that basically ends the game for one player regardless of what the opponent does. Not only does this let you look for trends in the mana / card power ratio in general, but you can then take the results a step further and look for which hands you can beat even when at a disadvantage. The same goes for picking a plan of attack in each individual game… this is especially true with control or hybrid decks that allow for more control over card quality and levels of aggression. It also makes sharing a general game plan far easier for the people in your test group who weren’t present.
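To make that concrete, here’s a minimal sketch of what such a session log might look like in Python. Everything in it is my own invention for illustration (the GameRecord fields, the deck names, the example notes), not a standard format; the point is simply that each game carries its result plus the marks that explain it.

```python
from dataclasses import dataclass

@dataclass
class GameRecord:
    our_deck: str           # the deck being tested
    their_deck: str         # the gauntlet deck it faced
    won: bool               # did our deck win this game?
    on_play: bool           # play/draw often drives matchup trends
    our_mulligans: int = 0  # mulligans taken by our side
    mana_issue: str = ""    # "screw", "flood", or "" if mana was fine
    note: str = ""          # shorthand: why the game was won or lost

# Two hypothetical entries from a Slide vs. Goblins session:
session = [
    GameRecord("Slide", "Goblins", won=True, on_play=False,
               note="Slide engine online turn 5; stabilized at 6 life"),
    GameRecord("Slide", "Goblins", won=False, on_play=True,
               our_mulligans=2, mana_issue="screw",
               note="stuck on two lands; says little about the matchup"),
]
```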

The final thing to keep in mind is that, ultimately, you want to find out why games are being decided the way they are: natural deck advantage, strategic superiority, consistency, more opportunities for a skilled opponent to outplay you, and so on. The entire idea behind setting up serious testing sessions is twofold. First, gain the sheer numbers typically necessary to find the best deck, or at least an acceptable one; second, figure out the plan of attack that is most effective against the most popular and best decks in the format.
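Continuing the hypothetical log sketched above, a few lines of aggregation illustrate both halves of that idea: matchup win rates computed only from ‘real’ games (with mulligan- and mana-ruined games tallied separately), and a flag for matchups that are still short on sample size. The 20-game cutoff is an arbitrary placeholder of mine, not a number from any testing doctrine.

```python
from collections import defaultdict

def summarize(records, min_games=20):
    """Print per-matchup records, separating games decided by mulligans or mana."""
    stats = defaultdict(lambda: {"wins": 0, "games": 0, "non_games": 0})
    for g in records:
        key = (g.our_deck, g.their_deck)
        # A game decided by a double mulligan or by screw/flood speaks to
        # the deck's consistency, not the matchup, so count it separately.
        if g.our_mulligans >= 2 or g.mana_issue:
            stats[key]["non_games"] += 1
            continue
        stats[key]["games"] += 1
        stats[key]["wins"] += int(g.won)

    for (ours, theirs), s in stats.items():
        losses = s["games"] - s["wins"]
        rate = s["wins"] / s["games"] if s["games"] else 0.0
        flag = "  <- sample still too small" if s["games"] < min_games else ""
        print(f"{ours} vs {theirs}: {s['wins']}-{losses} ({rate:.0%}), "
              f"{s['non_games']} non-games{flag}")

summarize(session)  # using the session list from the sketch above
```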

Josh Silvestri
Team Reflection
Email me at: joshDOTsilvestriATgmailDOTcom