LFBC and Claims about Adobe Software

The Long Form BC is Not a Forgery: Layers and Illustrator
The new Birther claim is that President Obama’s recently released Long Form Birth Certificate is a forgery, because it was created in layers. This is complete nonsense. The President’s LFBC is a paper document. The PDF file posted on the White House web site is a scan of the paper document. A scan of a paper document can’t show how a document was or wasn’t created — it’s merely a picture of a physical object. Birthers claim that “layers” can be seen by opening the PDF in Adobe® Illustrator®. Here’s a detailed explanation of how and why these “layers” appear.

Compression algorithms are software programs that take advantage of redundancy in an image or data to reduce the size of the file and make it easier to store and transmit. There are many different such algorithms because different types of files are best compressed in different ways. Symbolic data (such as text, numbers or an executable program) compresses differently than image data, and has a greater need to be decompressed with no data loss. Color images compress differently from black and white images. Images compress differently from audio. Images and audio are far more tolerant of “lossy” compression techniques, since they only need to accommodate the thresholds of the human eye and ear.
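The distinction can be sketched in a few lines of Python (a toy illustration of the two ideas, not any particular codec):

```python
import zlib

# Lossless: symbolic data must survive a round trip exactly.
text = b"certificate of live birth " * 50
packed = zlib.compress(text)
assert zlib.decompress(packed) == text          # no data loss
print(len(text), "->", len(packed))             # redundancy shrinks the file

# Lossy: image data can tolerate small errors the eye won't notice.
# Collapsing 256 gray levels down to 8 loses detail but saves bits.
pixels = [17, 18, 16, 200, 201, 199]
quantized = [p // 32 * 32 for p in pixels]
print(quantized)  # -> [0, 0, 0, 192, 192, 192]
```

Note how the six slightly different shades collapse into just two values: that loss is invisible to the eye but makes the data far more compressible.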

Every single “anomaly” identified by the fleet of amateur Birther “image analysts” in an attempt to discredit the Obama long form is directly attributable to the compression algorithms applied to the PDF between scanning and publishing on-line. None of them are signs of forgery or fraud, and all of them are so obviously generated by a non-human process that it is sometimes difficult to credit even amateur sleuths with simply not noticing them.

The Layers:

A PDF (as opposed to a JPEG or most other pure “raster” images) is a complex file format, capable of containing raster, vector and symbolic data all within the same file. As such, it provided the software designers who invented it with both a compression challenge and a software design opportunity. The challenge was that a single compression algorithm would not work well for every PDF. The opportunity was not simply to allow different algorithms for different PDFs, but also to create a process that actually deconstructed even a “flat” PDF into different components that could use different compression algorithms at the same time.

The “layers” that amateur Birther “image analysts” have found in the long form PDF are artifacts created by this compression process. They are not the results of a human “forger” assembling a digital document which was then printed on paper; they are the result of a paper document being disassembled by a computer algorithm for easier storage and transmission on-line.

That this is what happened here can be clearly seen by looking at the details of the “layers” discovered in Adobe Illustrator. The color components of the image appear in one “layer,” while the other large “layer” is completely black and white, showing that the algorithm stripped the black and white components out of the total image for a different compression process.
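The split the algorithm performs can be sketched like this (a simplified stand-in for the mixed-raster segmentation real PDF optimizers use; the pixel values and the exact-black rule are invented for illustration):

```python
# A tiny "scanned page": rows of RGB pixels. Pure black pixels are
# crisp text; the GRAY pixel stands in for an anti-aliased edge on
# green security paper that is *almost* black, but not exactly.
BLACK, WHITE, GREEN, GRAY = (0, 0, 0), (255, 255, 255), (200, 230, 200), (60, 70, 60)
page = [
    [GREEN, BLACK, BLACK, GREEN],
    [GREEN, GRAY,  BLACK, GREEN],
]

# Split: perfectly black pixels go to a 1-bit mask layer; everything
# else (including the near-black GRAY pixel) stays in the color layer,
# leaving a white "shadow" where each black pixel was removed.
mask  = [[px == BLACK for px in row] for row in page]
color = [[WHITE if px == BLACK else px for px in row] for row in page]

print(mask[1])   # -> [False, False, True, False]  (GRAY survives in color)
print(color[1])  # -> [(200, 230, 200), (60, 70, 60), (255, 255, 255), (200, 230, 200)]
```

The mindless "exactly black or not" test is the whole story: anything anti-aliased or tinted stays behind in the color layer, no matter how black it looks to the eye.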

Look at the “color” layer and notice the following details.

Image

First… it contains almost every color component of the PDF. The other “large” layer is black and white… possessing not even shades of gray. (There are only five other tiny objects that were also stored in color… primarily the date and certification stamps… features that also were not identified by the compression process as perfectly black.)

And that’s the key to how features were separated. The very few “black” details that remain in the color layer are not actually black. Blowing up the detail to get a look at the individual pixels shows that they are composed of pixels of many different colors… with few of them being simply black. Examine, for example, the single digit of the Certificate number that was not stripped out of this layer. This is the last “1” in the number.

Whereas all the other digits are composed completely of pixels that are a single color black, this digit (from the same number) is composed of many different shades of gray and green. It takes a lot more information to store that color image of the number “1” than it does to store the monochrome “1”s that appear just four and five digits earlier in the same number. It was compressed in color, while the rest of the number was stripped out into a different layer that was compressed entirely (and far more efficiently) in black and white.
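The cost difference is easy to put numbers on (glyph size invented for illustration; this is the raw pixel cost before any further compression):

```python
# A digit roughly 10 x 20 pixels, stored raw before entropy coding.
w, h = 10, 20

mono_bits  = w * h * 1    # 1 bit per pixel: black or white
color_bits = w * h * 24   # 24 bits per pixel: 8 each for R, G, B

print(mono_bits, color_bits)    # -> 200 4800
print(color_bits // mono_bits)  # -> 24x more raw data for the color "1"
```

And the monochrome layer then compresses far better on top of that, which is exactly why the algorithm strips out everything it can classify as pure black and white.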

The second large layer is perfectly monochrome, consisting of pixels that are only white or only black. There are no shades of gray or hints of the green security image. Such an image is very easy to compress to a tiny fraction of its original size, making it an obvious set of features for a computer program to strip out and treat as a different object with a different compression algorithm.

One of the ways we know these two “layers” were created by the compression process from an original single layer is that there is absolutely no overlap between the layers. Every single pixel in the black and white “layer” falls onto an empty (white) pixel on the color layer. In the color layer these features appear as white “shadows” where the black features were stripped out. If these were genuinely different layers created by a human forger, it would be expected that at least some of the black pixels should overlap green pixels from the security paper layer below. There are more than 2 million pixels in this PDF. Not a single one of them overlaps between the two layers.
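The no-overlap test described here is trivial to state in code (a sketch using a toy mask and color layer; the pixel data is invented for illustration):

```python
# Toy stand-in for the two extracted "layers": a 1-bit text mask and a
# color layer of RGB pixels.
WHITE, GREEN = (255, 255, 255), (200, 230, 200)
mask  = [[False, True,  True ],
         [False, False, True ]]
color = [[GREEN, WHITE, WHITE],
         [GREEN, GREEN, WHITE]]

# If the layers came from splitting ONE scanned image, every masked
# (black) pixel must sit over a white "shadow" in the color layer.
overlaps = sum(
    1
    for mrow, crow in zip(mask, color)
    for m, c in zip(mrow, crow)
    if m and c != WHITE
)
print(overlaps)  # -> 0: no black pixel covers anything but a shadow
```

Layers pasted together by a human would almost inevitably produce a nonzero count, because the forger's text layer would sit on top of intact security-paper pixels rather than on holes punched out of them.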

Another way we can tell the compression process was performed by a computer and not a human forger is that the layers themselves do not even make sense from the perspective of human forgery. The final digit in the Certificate Number is the most obvious example of where objects are split between two different layers in a way that doesn’t make obvious sense. A more subtle but even more telling example is in the signatures. The entire signature of the Local Registrar appears on the color “layer” except for a single cursive letter “i” that appears on the black “layer.” A human forger would be expected to create (or extract) a signature as a single unit and then layer it onto the forged image. But who would ever forge a signature in multiple parts, placing every letter except one as part of one large color object, and then put a single letter of that signature in a completely different large black and white object?

Of course, they wouldn’t.

But a mindless computer algorithm instructed to strip out everything that looked perfectly black would show many such weird choices, unable to intelligently understand that the letter had anything to do with a larger thing called a “signature.” It stood alone. It was black. It became part of the black and white layer while the rest remained in color.

The bottom line is that the “layers” found by amateur Birther “image analysts” are completely unlike what would be expected from an actual forgery. They are simply the ordinary results of a scanned document being optimized as a digital file. Even very conservative sources such as WND, National Review Online and Fox News have reached that conclusion.

But here is where it gets fun… Miss Tickley’s “discovery” of identical pixel patterns across the birth certificate. How in God’s name did that happen?

Identical Pixel Patterns:

A black and white image can compress so much more efficiently than a color image not merely because it only has to account for two colors, black and white. The vastly simplified image also provides opportunities to search for repetition and redundancy. If parts of the image are identical, the compression algorithm can store those multiple parts a single time rather than three or four or a dozen different times. An identical letter (for example) that is repeated 40 times can be stored once, in 1/40th of the file space it would take to store 40 identical letters 40 times.
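That saving can be sketched with a toy symbol dictionary (an illustration of the idea, not the actual on-disk format any codec uses):

```python
# Each glyph on the 1-bit layer is a small bitmap, written here as
# tuples of rows so the bitmaps can be collected into a set.
T = ((1, 1, 1),
     (0, 1, 0),
     (0, 1, 0))
page_glyphs = [T] * 40          # the same letter appears 40 times

# Naive storage: 40 separate copies of the bitmap.
naive_bits = sum(len(g) * len(g[0]) for g in page_glyphs)

# Symbol-dictionary storage: each distinct bitmap once (placement
# references, ignored here, would cost a few extra bits apiece).
dictionary = set(page_glyphs)
dedup_bits = sum(len(g) * len(g[0]) for g in dictionary)

print(naive_bits, "->", dedup_bits)  # -> 360 -> 9 for the bitmaps alone
```

This is the same trick that makes every deduplicated copy of a glyph on the page pixel-for-pixel identical: they are literally the same stored object, drawn in forty places.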

So the black and white compression algorithm searches for objects or patterns in the image that are close enough to identical that they can be tagged and stored that way.

This is called a “lossy” compression algorithm, because it actually does “lose” information in the effort to store it most efficiently. It counts on the fact that the human eye would never have been able to tell those objects apart anyway, so the data lost is not meaningful. During compression it actually compares the objects (two check boxes, for example, or two letters “T”) and determines that if they are close enough, they will be stored as two copies of a single identical object.
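The “close enough” test can be sketched as a pixel-difference count against a cutoff (the bitmaps and the threshold value are invented for illustration; real symbol matchers use more elaborate criteria):

```python
def hamming(a, b):
    """Count the pixels where two equal-sized 1-bit bitmaps differ."""
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

# Three scanned check boxes: box_b matches box_a exactly, while
# box_c differs from box_a by two pixels.
box_a = ((1, 1, 1, 1),
         (1, 0, 0, 1),
         (1, 1, 1, 1))
box_b = box_a
box_c = ((1, 1, 1, 1),
         (1, 0, 0, 0),
         (0, 1, 1, 1))

THRESHOLD = 1  # the "close enough" cutoff the algorithm was given

def matches(a, b):
    return hamming(a, b) <= THRESHOLD

print(matches(box_a, box_b))  # -> True: stored once, placed twice
print(matches(box_a, box_c))  # -> False: just over the cutoff, stored separately
```

To a human eye all three boxes look the same; the algorithm simply counted differing pixels and found the third one over its limit, which is exactly the pattern of two identical boxes plus one near-miss discussed below.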

As in the separation between “layers” already discussed above, the choices made by the computer are mindless and therefore often don’t make sense from a human perspective. Look for example at the check boxes identified by Miss Tickley.

Image

Two of those check boxes are pixel for pixel identical. This could certainly be a result either of a human being cutting and pasting the same check box multiple times, or of a compression algorithm deciding the objects were close enough to make them identical for storage. But what then about that third check box? The third box is not identical, which doesn’t make sense if this was a human forger creating the document from scratch. A human would most likely simply reuse the same image over and over… and would certainly use the same one to place three such boxes so close together on the form.

A mindless computer algorithm doesn’t even know what a “check box” is, and certainly doesn’t care about how close they might be together on a “form” that it also doesn’t understand. It did not recognize that third check box as being close enough to store them as identical. Certainly, to your eye and mine they look the same. But computers do not do what we want them to do; they do what we tell them to do. And the algorithm, as written, told the computer that this box was different enough to be stored separately.

Again… the bottom line is not just that the (same) explanation of compression algorithms accounts for both the layers and the identical “objects” and letters found on the form, but that human forgery does not. No human forger would ever create the suite of characteristics seen in this PDF and “discovered” by our fleet of amateur sleuths. On the contrary, they are further evidence that the long form birth certificate released by President Obama is absolutely authentic.

Falsehoods Unchallenged Only Fester and Grow