Testing the New Piezography Ultra HD Matte Black

The second “small batch” run of the new Piezography Pro inks is now available, and with it the new Piezography Ultra HD Matte Black is available for individual sale to use with existing K6 and K7 ink sets. The sales and social media posts about the new ink reported a Dmax of about 1.8, which is incredible for unpolarized measurements of matte inkjet prints, but as far as I can find there haven’t been any outside tests or reviews of the new ink. I was eager to load it up, make a few tests myself, and compare it to some of the existing matte black inks out there (I haven’t tested HP or Canon inks with QTR yet, so these tests were limited to the most common QTR-compatible inks).

I should also first note that around the time the PiezoPro inks were coming out, I created my own custom K5+LcLmY set for specialized toning setups. I invested a lot of time and energy building different toning profiles, so it didn’t make sense to switch the printer used in these tests over to the dual-quad PiezoPro ink set. I am planning on installing that in a partially clogged 9900 when I get around to swapping out the capping station and damper assembly, but that is a whole other situation…

Back to the 3880. Before switching out the Shade 1 MK cartridge in my current setup, I did a quick test comparing several other existing carbon matte black inks to the new Piezography Pro Ultra HD Matte Black. You can read about some testing I did a few years ago with the existing Piezography Shade 1 Matte Black and Eboni Matte Black (version 1) in the post Comparing Cone Piezography Shade 1 and MIS Ebony Shade 1. This time around I set out to test the original Epson UltraChrome MK from the x800-x880 printers, the new HD Matte Black in the P800 (and all the other new SureColor series printers), and the STS MK I have been using in the small format 1430 I use for other kinds of testing (the STS MK didn’t make it into the 3880).

 Mobius Arch in Black and White, Alabama Hills, Lone Pine, California

Making the Switch

How you actually swap out the cartridges and inks will depend somewhat on your printer model. The small format printers like the 1430 or 2880 that do not have ink lines only require swapping out the refillable MK cartridge, followed by a quick head cleaning and a nozzle check to make sure the ink is flowing properly.

The larger format printers will take more time to clear the old ink from the lines, and the best way to clear them is to print ink purge sheets using QuadToneRIP Calibration Mode. This is especially true for the x990 printers, where the old method of using 2-3 power cleaning cycles is not recommended (and throws away ink from all 10 channels rather than just the one you are switching out). In my case, I hadn’t printed with the 3880 in a few weeks and wanted to clear out any sedimentation in the lines, so I did one power clean cycle, and three 8x8-inch purge sheets seemed to do it. (I measured the densities of each purge sheet at the top and bottom of the page and stopped after the density didn’t change between the last two sheets.)

Even after only one power cleaning, and part-way through printing the first 8x8 black channel purge sheet, it was obvious the new black was a lot denser. How much denser? That is what spectrophotometers and spreadsheets are for…

As soon as I got the old Epson MK cleared from the lines, I wanted to do a quick relinearization of some existing curves I had for the K5 ink set, just to see what impact the new black had on a quick test print. The shadows and deep blacks in the print were denser, but the relinearization wasn’t as smooth as I would like (as in it wasn’t perfect), and I wanted to see what this ink could really do with my QuadToneProfiler tools. The other benefit of starting from scratch is that you can see exactly where the sweet spot in the ink limit is, and carefully define how evenly each of the diluted inks is distributed throughout the scale.

STS MK on left vs. new Piezography UHDMK on right

It is a welcome surprise to see this new ink does not have the same “oily” reflectance the STS MK has, and does not get lighter when the maximum ink load passes its optimal level. This is a distinct difference from the STS MK, which produces lower reflected densities as the ink limit increases; even when there is no measurable tonal reversal, you will still often see a noticeable shift when the STS MK hits the 100% step. The nice thing is the new Piezography Ultra HD MK continues to increase in density all the way to 95-100% (although there might be little to actually gain from setting the limit that high, due to excessive bleed from such a high ink limit and the increased overlap of the Shade 2 ink. I did test this, and found that an overall K ink limit of 55 and a K Boost setting of 75-80 produced a Dmax of 1.77-1.78, with no difference after manually setting the Dmax quad value in the black channel to 95%).

 

Testing the Four Matte Black Inks

Testing Methods

The tests were pretty simple in that they were just measurements of a 21-step grayscale printed with the K channel from each ink separation image. I used an average of four samples per patch for each ink. The prints were force-air dried for two minutes, at which point I took an initial measurement, and then left to air dry for 24-36 hours before taking the final measurements used in the graphs. I used Hahnemühle Museum Etching for these tests. There is a slight tooth to this paper, which in some cases could result in increased random scattering of the light and densities reading lower than they actually are. A quick test of the UHDMK on Hahnemühle Photo Rag produced similar densities to the Museum Etching, so I didn’t take the extra time to test multiple papers from the same manufacturer for this first round.

The Measurements

The new Piezography Ultra HD Matte Black is clearly the densest, closely followed by the new Epson MK used in the new SureColor series printers. There is no current solution for using third-party inks in the new SureColor printers, so those making prints with the UC K3 inks will be satisfied with the advance in the Epson inks as far as Dmax goes. However, if you are able to use third-party inks in refillable cartridges, there is no reason to use any ink other than the new Piezography UHDMK.

Measurements from a Linearized Custom Quad Curve

Looking at measurements from an ink separation image can be insightful, but the Dmax from the overlapping shades will usually not match what you set in the ink limit for the K channel. This is usually because of the additional overlap from the next lighter shade at the very end of the scale, and there needs to be a careful balance between getting the highest possible density, preventing excessive bleed, and accepting the slightly lower density from the overlap of the diluted gray inks. On the two Hahnemühle papers I tested I was able to maintain a Dmax of 1.77-1.78 (L* ~13.7) to 1.81 (L* ~12.92), where all the other inks topped out around a Dmax of about 1.65 (L* ~16.7). The best I was ever able to do with the STS MK was on Museo Portfolio Rag, which maxed out at 1.7 (L* ~15.45).
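If you want to check the relationship between the L* values and densities quoted above yourself, it is just the standard CIE conversion from L* to luminance followed by a log. A minimal Python sketch (ignoring the linear segment of the CIE formula below L* of about 8, which never comes up with matte blacks):

```python
import math

def lstar_to_density(lstar):
    """Convert CIE L* to reflection density, D = -log10(Y), where Y is
    relative luminance on a 0-1 scale. Valid for L* above the CIE
    linear cutoff (L* > 8)."""
    y = ((lstar + 16) / 116) ** 3
    return -math.log10(y)

for lstar in (13.7, 12.92, 16.7, 15.45):
    print(f"L* {lstar:5.2f} -> D {lstar_to_density(lstar):.2f}")
# L* 13.70 -> D 1.78
# L* 12.92 -> D 1.81
# L* 16.70 -> D 1.65
# L* 15.45 -> D 1.70
```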

 

Creating Custom Quad Curves

If you do switch inks, there are a few options for linearizing and profile making, and which one you choose will depend on how far you want to dive into the process, how much money you are willing to spend on it, and whether you have the ability to measure your own printed targets.

  • QTR-Linearize-Quad (included with the $50 QTR license): If you do have the ability to measure your own targets, you could use the built-in QTR-Linearize-Quad applet and a 21-step measurement file to get a decent general linearization without spending any additional money. This might be the least-best option, because 21 steps are usually not enough to get the right amount of separation in the higher and lower densities from existing quad curves.
  • My QuadLin Service ($45): If you do not have the ability to measure your own printed target, I offer an affordable service that will relinearize your existing curves. You print the standard 51-step target and send it to me, and I will make a linear and a modified gamma-adjusted set of curves. I use my own error correction and linearization methods that produce prints that are indistinguishable from the Piezography methods (please note that I will not make custom curves from the Piezography master curves).
  • My exclusive QuadToneProfiler Deluxe Edition and QuadLin curve creation tools ($50-$90): These Microsoft Excel-based tools allow you to make your own custom media settings using automated formulas, create smoother Bézier-shaped master curves than the standard QTR methods produce, and include advanced single-step linearization functions with real-time quad curve previews (including built-in error correction tools that need only 51-step measurements and do not rely on the QTR-Linearize-Quad app).
  • Custom Piezography Curves ($100 per profile)
  • The Piezography Professional Tools ($150 for a 1-year license)

No matter which linearization system you use, the increased density of the new ink is a welcome advance in fine art black and white inkjet printing, and Jon Cone and all the folks at Cone Editions Press and Inkjetmall are owed a debt of gratitude for their continued commitment to pushing the state of the art.


About This Photograph

If you get my email newsletter, you might remember that I spent some time out West this winter. Most of the trip was spent in and around Los Angeles for PhotoLA and doing press checks on the last four books in the Portfolios of Brett Weston Series, but I also made some much-needed time to photograph in the Eastern Sierra and Death Valley.

This is one of the photographs I made with the Leica Monochrom on the last day of the trip, before heading to the airport to teach a one-on-one workshop in the Midwest.

The last morning in the Owens Valley was one of the best I've experienced in the nearly 20 years I have been traveling to the Eastern Sierra to photograph, and I stayed to photograph much longer than I should have, which made for a stressful drive back to LAX to catch my flight out. I think it was worth it. This is one of the first photographs I printed with the new Ultra HD Matte Black ink, and I think it really shows how the rich deep black enhances the feeling of the subtle midtone contrast while also allowing you to see deeper into the shadow around the edges of the arch. The smoothness of the K5 gray inks is crucial in separating the delicate clouds, and the added warmth of the toning inks gives a greater sense of depth than the Warm Neutral inks do on their own.


I am offering this photograph as a discounted example print until the end of April. Each one is printed on Hahnemühle Museum Etching (350gsm) and offered in two sizes: 7x10 inches for $50 (discounted from $250) and 12x17 inches for $200 (discounted from $750). Mounting and framing are also offered for an additional charge.

 

 

The QTR Black Boost Setting

K boost formulas

I am working away on some of the finer details of my QuadToneRIP book, and there was a recent question on the QTR Yahoo Group about how the K_Boost setting works when making custom QTR profiles. I've been meaning to dig in to see exactly what is affected by this setting, rather than just say "it affects the tones at the shadow end of the scale". So I poured a pot of coffee and spent my morning working out what I think is really going on with the K_Boost setting. I don't have the exact formula that matches the output from the QTR curve generation program, but here is the gist of what I see happening.

Basically, the boost setting is not JUST affecting the end of the scale, from 80% to 100%, but is actually applying some added density along the entire scale based on an overall gamma adjustment or predefined curve. The lower graph shows the percentage difference between the boosted and non-boosted curves. It is based on a boost setting 20% higher than the ink limit, and that difference is proportioned along the whole grayscale.

Ink Limits and Boost Settings

Your ink limit is some percentage of 100, and your boost is some larger percentage of 100, each of which is multiplied by 65535 (just the 16-bit value of the total possible ink load). That is pretty obvious just from looking at the total ink limit for any ink channel or by opening the quad curve in a text editor. The next part is where it gets tricky.

You take the difference of the black ink limit and black boost values. In the example I used, the limit was 50 (or 0.5 × 65535) and the boost was 60 (or 0.6 × 65535), and the difference is about 6553.

That difference is divided into 256 steps, and each step has some gamma adjustment applied to it (I used 2.07 in my example to get fairly close to the QTR-generated curve). The adjusted boost value for each of the 256 steps is then added to the pre-boost K values. So it really isn't *only* adjusting the shadow end of the scale. It is adding some proportion of the boost to the entire grayscale, but since a gamma adjustment is applied, most of the effect shows at the end of the scale.
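Here is that whole calculation as a short Python sketch. To be clear, this is my reconstruction of the behavior, not QTR's actual code, and the 2.07 gamma is just the value I fitted by eye:

```python
MAX_INK = 65535                  # 16-bit value of the total possible ink load
limit, boost = 0.50, 0.60        # K ink limit 50, K boost 60
gamma = 2.07                     # fitted by eye to approximate QTR's output

limit_val = limit * MAX_INK      # ~32768
boost_val = boost * MAX_INK      # 39321
diff = boost_val - limit_val     # ~6553, the total boost to distribute

# A simple linear pre-boost K curve over 256 steps
base = [limit_val * i / 255 for i in range(256)]

# Spread the boost along the whole scale with a gamma weighting, so most
# of the added ink lands at the shadow (100%) end of the curve
boosted = [k + diff * (i / 255) ** gamma for i, k in enumerate(base)]

print(round(boosted[64]), round(boosted[128]), round(boosted[255]))
```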

I played with the inputs a little more and compared them to the QTR-generated curves, and I expect there is something other than a simple gamma adjustment being applied. There might just be some predefined curve built into the program that is scaled to whatever the difference between the Limit and the Boost is, but that is a black box, and there is no public documentation of the exact curve or formula.

Here are a few graphs comparing the QTR-generated boost and the one with my gamma settings. They are not exact, but I hope they illustrate some of what is happening. I only graphed a single-partition K curve (essentially the Gray Curve, without the gray shadow/gray highlight/gray gamma settings).

 

Happy Birthday — After Edward Weston

Maybe it was the 10 years of psychotherapy, but I spend a lot of time thinking, “how did I get here?”

I do that with photography too.

In large part, I have Edward Weston to thank. The image of him living simply, being solely dedicated to his work, and photographing whatever was of interest to him was one of my earliest motivations for the path I took. Not only with the kind of photographs I wanted to make and the equipment I wanted to make them with, but also with the lifestyle (and the kind of work I wasn’t willing to make). I imagine it was very similar for many other photographers as well.

Even if it goes unnoticed, the urge we have had as large format landscape photographers to photograph so many of the iconic places over the past 50 years is a direct result of the journeys Edward Weston made in 1937. The first photographer to receive a Guggenheim Fellowship, at the age of 49, he traveled over 16,000 miles throughout California (and the West) with his 8x10 camera and his partner Charis, whom he married in 1939, usually also accompanied by his son Brett. Now, nearly 80 years later, those trips through California continue to inspire photographers to wander the desert, the coast roads, the High Sierra and its foothills, seeking out their own sense of discovery and the pulse of life itself.

I don’t know how many times I loaded the truck with enough food and 8x10 film and holders (and twice as much water) to live on for several days of wandering and photographing, but I always knew why I was doing it.

About this Photograph

This is a photograph from back in 2004 at Zabriskie Point, on one of those trips. Most of my work from that time is still only available as 8x10 gelatin silver contact prints, but I am now drum scanning that older work so I can begin to make enlarged platinum/palladium prints in the new darkroom (more on that later).

 

BLACK AND WHITE MASTERY AT LODIMA DIGITAL

I’m happy to announce that I am offering my custom drum scanning and printing services through Lodima Digital. I have been working closely with Michael A. Smith and Paula Chamlee and their publishing company, Lodima Press, since 2002.

We have set up a new digital scanning and printing studio at their unique Bucks County studio, where I run the Screen SG 8060P drum scanner and several large-format printers for producing black and white and color prints that are often bound for exhibitions, museums, and private collections. I take the highest pride in the quality of the prints I produce, and bring that same standard to every image a photographer sends me to print.

Along with printing images, I also work closely with photographers in scanning negatives and prints to bring the most out of their images and files, and collaborate with them to produce prints that execute their vision.

SCANNING

 

Not all drum scanners are created equal, and the Screen SG 8060P is one of the best ever made. Original prints and negatives up to 20x24 inches can be scanned at up to 12,000 samples per inch in 16 bits per channel.

The scanner is only as good as the operator, and scanning negatives is a particular challenge on drum scanners and their dedicated software, which were originally designed for scanning positive transparencies and prints. I've found that the plug-ins developed for scanning negatives are unreliable and don't do as good a job as a manual setup, inversion, and contrast correction. I have created custom profiles and input curves for black and white negatives with various density and contrast ranges. I also perform individual setups and a low-resolution scan at 150-300 SPI to proof the inversion and contrast correction before starting the full resolution scan.

SCANNING RESOLUTION AND PRICING

8000 to 12,000 SPI is usually overkill for anything larger than medium-format films, but we can accommodate large- and ultra large-format negatives and unmounted prints. Large-format negatives will be scanned at 1200-4800 SPI, depending on the size of the original and the expected final print size.

Pricing is determined by the size of the original and the input resolution, and includes initial contrast adjustment and basic dust removal. Additional retouching and restoration work will be quoted on an individual basis. Scans are provided via Dropbox as soon as they are completed, and on DVD, included with the return of the originals.

| Original Size | 2000 SPI | 4000 SPI | 8000 SPI | 12000 SPI |
|---|---|---|---|---|
| 35mm (24mm x 36mm) | - | $35 | $50 | $80 |
| X-Pan (24mm x 64mm) | - | $45 | $60 | $90 |
| 6x4.5cm, 6x6cm, 6x7cm | - | $50 | $80 | $125 |
| 6x17cm | $50 | $80 | - | - |
| 4x5 inches | $75 | $125 | - | - |
| 5x7 inches | $85 | $125 | - | - |
| 8x10 inches | $125 | $200 | - | - |
| 8x20 inches | $125 | $200 | - | - |

PRINTING

My passion is printing black and white, and I have spent considerable time profiling and calibrating the different printers for a number of media and ink combinations to meet the needs photographers might have for their images. I prefer to work directly with photographers when processing and preparing their images to print, and work to find a paper and print color that best matches the rest of their work.

| Print Size | Price of 1st Print | Each Additional Print |
|---|---|---|
| 8.5x11 inches | $17.50 | $13 |
| 11x14 inches | $34 | $25 |
| 17x22 inches | $65 | $50 |
| 24x30 inches | $120 | $95 |
| 32x44 inches | $235 | $175 |
| 44x56 inches | $340 | $300 |
| 44x80 inches | $475 | $400 |

RETOUCHING AND RESTORATION

I offer retouching for offset reproduction as well as digital restoration of old, damaged originals, including scanning and restoring glass plates. Please contact me privately to discuss your individual needs and to get an idea of the scope of the project and the estimated cost.


SCANNING, PRINTING, AND RETOUCHING INQUIRY FORM


Special Capture One Pro Print Offer

I’m offering a special limited edition print of the image featured in my guest post on Phase One’s Image Quality Professor blog.

Joshua Tree National Park, 2012
$125.00

Each print is made on Canson Edition Etching with my custom-blended split tone Piezography K6 ink set. The image size is 6 3/4 x 10 inches and is mounted and overmatted to 13 x 15 inches. This image is only being printed in this edition and is limited to 100 signed and numbered copies. 


Black and White Mastery with Capture One Pro 9

If you are interested in learning more about using Capture One Pro for black and white, see this page for information on my one-on-one course.


New and Improved i1 Profiler Workflow for QuadToneRIP

I posted a few different i1 Profiler workflows earlier this year for targets used in the QTR profiling process. Since then I have done a little more experimenting, and it is time to post an updated workflow that makes measuring different kinds of charts easier and less prone to errors. The previous workflows required specially formatted patch charts and reference files for i1 Profiler. I actually held off using i1 Profiler for a long time because I thought it was trying to be too smart about the kinds of targets, the patch layout, and the way they are measured. I was doing everything I could think of to try to trick it into working the way I needed it to for my QTR profiling process. Then I stumbled onto something that changed everything and makes i1 Profiler much more flexible.

With this new workflow, you can use the same step wedges that came with QuadToneRIP and completely ignore the reference file. That is the huge difference from the old way of working with MeasureTool, and from how I previously understood i1 Profiler to work with reference files. This new method allows you to simply define the structure of your chart and then measure it. Since there is no expected measurement value for any patch, it cuts down on the “expected to find 21 patches but only read 19” kinds of errors. I still use the older i1 Pro measurement device and have had no problems working this way while setting up a couple of new printers, and I have been teaching it in a few private QTR workshops for the past few weeks.

This is not limited to creating custom QTR curves. This workflow and the measurement files it produces can be used for creating custom grayscale ICC profiles for Epson ABW, Canon, or HP printers. However, you will need the QTR download to get the QTR-Create-ICC-RGB application.

Step-by-Step Instructions

Choose the chart you want to measure.

The most often-used step wedge in QTR is the 5%, 21-step target—either the 21x4 random or the simple 21-step ramp. The 21-random target was made to prevent patch-to-patch errors that can happen if densities in the standard 21-step target are too close together and the software reads them as a single patch. If you use a step wedge with gaps between each patch, you shouldn't need the random target, but it can still be a good idea because it prints each patch 4 times and averages the densities of each.
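If you end up post-processing the measurements yourself rather than letting a spreadsheet do it, averaging the four repeats is just a matter of grouping the readings by patch name. A minimal sketch with made-up L* readings (the grouping is the point, not the numbers):

```python
from collections import defaultdict
from statistics import mean

# Made-up (patch name, L*) readings standing in for part of a measured
# 21x4 random target: each patch appears four times, scattered over the sheet.
readings = [("0%", 95.1), ("50%", 52.4), ("100%", 13.8), ("50%", 52.1),
            ("0%", 95.3), ("100%", 13.6), ("50%", 52.6), ("0%", 94.9),
            ("100%", 13.7), ("50%", 52.3), ("0%", 95.2), ("100%", 13.9)]

by_patch = defaultdict(list)
for name, lstar in readings:
    by_patch[name].append(lstar)

# One averaged value per patch is what goes into the linearization
for name, values in by_patch.items():
    print(f"{name:>4}: mean L* = {mean(values):.2f} (n={len(values)})")
```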

Printing the target

If you are reading this it is a good bet you have an application in mind for the measurement data so I won’t go into too much detail about how or where to print your targets. I will just say that I prefer to use PrintTool on the Mac for printing test targets so I can control when and how color management is being applied before printing.

Once the target is printed, you should let it dry for a few hours or overnight (or blast it with a hair dryer for a minute or two) and then move on to i1 Profiler.

Using i1 Profiler

Launch i1 Profiler and choose the Advanced radio button. Select CMYK printer, and then the measure chart option.

You should see a blank chart, with options to define the chart structure. In the previous workflows I said to load a reference file to define the chart. Ignore that, and just type in the number of rows and columns in your chart.

The 21x4 random is 4 rows and 21 columns, and the 51-step chart is 3 rows and 17 columns.

If you want to use my 51x3 linearity checker, use 9 rows and 17 columns, and measure the whole 51-step target 3 times. I think you will find the chart is easier to read in strip mode.
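For anyone scripting their own lookups, the row-and-column layout maps back to patch numbers with simple integer math. A sketch, assuming zero-based indices and row-by-row reading order:

```python
# Layout of the 51x3 linearity checker described above: 9 rows x 17 columns,
# so every 51 consecutive patches are one full pass through the 51-step wedge.
COLUMNS = 17
STEPS = 51

def patch_of(row, col):
    """Return (step, repeat) for a zero-based row/column position."""
    index = row * COLUMNS + col      # position in reading order
    return index % STEPS, index // STEPS

print(patch_of(0, 0))   # (0, 0) -> first step, first pass
print(patch_of(3, 0))   # (0, 1) -> first step, second pass
```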

Saving the Measurement Data

You will see the option for saving page data near the bottom left of the window. Once you click that, you can navigate to your measurement files folder and create a file name that makes sense for the printer/paper/inks and the chart you are working with. Then make sure you save the chart as CGATS* (Custom) in the file format dropdown box. The next window will give you options for the measurement data to include in the file.

What data do you really need?

There are a few different things that are important for using my QTR workflow, which will be detailed in my forthcoming QTR book. But for general QTR use, you will need the Sample ID, Sample Name, and L*a*b* data. If you are using any of the spreadsheet templates I have posted, you will also need to save the Location Info from the dropdown box on the right, as well as the XYZ data. The Location Info gives the row and column position of each of the patches, and I use it in the spreadsheets for looking up the different patch readings. The XYZ data is included so the XYZ_Y readings can be used to easily calculate density (without going through a conversion of L*a*b*_L* to XYZ_Y to density). The QTR-Linearize-Data and QTR-Create-ICC scripts will do this L*a*b*-to-density conversion for you if you do not have the XYZ data.
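If you want to pull those fields out of the saved file yourself, a CGATS file is easy to parse: a BEGIN_DATA_FORMAT line names the columns, and the rows sit between BEGIN_DATA and END_DATA. A rough sketch (SAMPLE_NAME and XYZ_Y are the usual CGATS.17 field names, but check your own export, since i1 Profiler only writes the fields you select, and this assumes tab-delimited rows):

```python
import math

def read_cgats(path):
    """Parse a simple CGATS measurement file into a list of dicts."""
    with open(path) as f:
        lines = [ln.rstrip("\r\n") for ln in f]
    fields = lines[lines.index("BEGIN_DATA_FORMAT") + 1].split()
    start, end = lines.index("BEGIN_DATA") + 1, lines.index("END_DATA")
    return [dict(zip(fields, ln.split("\t"))) for ln in lines[start:end]]

# Density straight from XYZ_Y (0-100 scale), skipping the L* conversion
for patch in read_cgats("measurements.txt"):
    y = float(patch["XYZ_Y"])
    print(patch["SAMPLE_NAME"], round(-math.log10(y / 100), 2))
```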

You can see the screenshot with the rest of the default options, which I leave alone, in the lower portion of the window. Once you hit OK, it will create the measurement .txt file that you can open in a text editor or Excel, or run through the various QTR applications. Now that you have saved the data, you need to navigate to it, and this is when having it organized and saved in a sensible location comes in handy.

I mentioned this in the previous posts, but it bears repeating: i1 Profiler will want to save the measurement files in its default location, which is buried in the application support folder, and it won’t remember where you saved any previous measurements if they weren’t in that default folder. I encourage everyone to create a separate folder for all your measurement files in your documents directory, on the desktop, or some other easy-to-reach place on your hard drive. Then, within that folder, create different folders for each of the papers you will use. Working with the measurement files will be a lot easier if they are organized and easily accessible for however you are going to use them later in the process, or for referring back to them at some point in the future.

Using the Measurement Data

If you are at the QTR linearization step, or confirming a successful linearization, launch the QTR-Linearize-Data application (some people call it a droplet or script) and drag the measurement file onto the QTR-Linearize-Data icon in the dock. You will get a new measurement-file-name-out.txt file that graphs the measurement data and gives you a string to paste into the QIDF text file.

You can use the same original measurement data files with the QTR-Create-ICC application. It works similarly to the QTR-Linearize-Data application, but also creates an ICC profile that you can use for color management and soft proofing.

I will be posting more Excel templates that have been updated and formatted for this new workflow. Until then, you can just select and copy the cells with the XYZ through L*a*b* data and paste them into those cells in the current templates.

Black and White Conversions - Part 1: A Short History/Science Lesson

How is it possible that I have gone more than a year without a single post dedicated to black and white conversion techniques? Black and white conversion is a large and sometimes complicated subject with many different approaches and differing opinions on which one is the “best” or most effective.

Since everything we do digitally is based in some part on how the medium's analog technology evolved over the years, we owe it to ourselves (and to everyone before us who made what we do possible) to have a basic understanding of that history and some of the key points in its development.

How Traditional Color Filtration Works - A Short History and Science Lesson

Gustave Le Gray - Seascape, study of clouds
Between 1856 and 1857 - Albumen print from a collodion glass negative H. 32; W. 39 cm

Why do it? Part of the history lesson…

Before panchromatic film (which was designed to be sensitive to all of the visual spectrum) there were silver collodion glass plates, which were sensitive to about 400-520 nm: mostly sensitive to the near-ultraviolet and blue part of the spectrum, barely sensitive to the blue-green portion, and not at all sensitive to the green-yellow-red part of the spectrum. The photographs made with this process have a unique look, and usually have a pure white sky. There are some interesting compositional challenges because of this, which led to some of the first composites. Gustave Le Gray (by the way, what a GREAT name for a photographer) would take an underexposed sky from one negative and composite it into a print with a different negative—you try doing that without Photoshop…

When panchromatic films came around, they were sensitive to the whole visual spectrum. This led to some other problems with focus from chromatic aberrations and diffraction, but it also allowed for some interesting technical and creative controls. Photographers could now choose how much of the visual spectrum was used to expose the film. These creative controls are usually employed to remove atmospheric haze, increase the separation of tones in skies (or make them really dramatic), increase the tonal separation or exaggerate the appearance of foliage, or separate tones based on color in rocks. Usually yellow, orange, red, or green filters were used, but a blue filter could be used to mimic the look of those 19th century photographs made with blue-sensitive emulsions. Foliage can have a different and distinct look when photographed with a blue filter, which can be approximated digitally from color originals (because it is also possible to do this kind of black and white conversion with scanned color negative films and transparencies).

And now for the 10 cent science lesson

When working with traditional panchromatic black and white film, you would often increase contrast in the scene by filtering certain wavelengths of light from exposing the film. This could be done with any number of color filters, for many different reasons, but they are all based on the same principle. There is a lot of math and physics involved in color science, and everything we do with digital cameras and computers owes its existence to this field. Since I’m not a mathematician or color scientist I won’t go into any of that, but it is a good idea to have a basic understanding of how we see, how the camera/film sees, and how filtration works, so we can apply it to working with digital cameras and editing software.

All light starts as “pure” white, and is really just a combination of all parts of the electromagnetic spectrum, most of which we cannot actually see. We “see” color as the part of the visible spectrum that is reflected back to us, and we don’t see the part of the spectrum that the material absorbs.

Filters for panchromatic film function the same way as any other object—additives in the glass absorb part of the spectrum and pass what is not absorbed. This becomes useful in black and white photography because the filters are made of optical glass, so the same color we see when looking at the filter is also what passes through to expose the film. The part of the spectrum we want to prevent from exposing the film is “stopped” before passing through the lens. In the next post I will go into how these kinds of effects can be applied digitally to your color files.
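To make the filtration idea concrete before then, here is a small numpy sketch that weights the RGB channels of a color file the way a colored filter weights the spectrum. The weights are purely illustrative, not calibrated to any real filter:

```python
import numpy as np

def filtered_grayscale(rgb, weights):
    """Convert an RGB image (float 0-1, HxWx3) to grayscale using channel
    weights that mimic a colored taking filter."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalize so overall exposure stays roughly even
    return rgb @ w           # weighted per-pixel sum

# Illustrative weights: a red filter passes red and blocks blue, so blue
# skies go dark; a blue filter approximates the collodion-era look, so
# skies go nearly white.
red_filter = (0.9, 0.2, 0.05)
blue_filter = (0.05, 0.15, 0.9)

img = np.random.rand(4, 4, 3)    # stand-in for a real image
dark_sky = filtered_grayscale(img, red_filter)
pale_sky = filtered_grayscale(img, blue_filter)
```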

Process, Abstraction, and the Edge of Disbelief

The nature of the process and creative constraints

Garry Winogrand, Hollywood and Vine, Los Angeles, 1969

I like to think of photographing as a two-way act of respect. Respect for the medium by letting it do what it does best, describe. And respect for the subject by describing it as it is. A photograph must be responsible to both.... — Garry Winogrand

I take that idea of respect for the medium as extending to the nature of the process itself, and with digital photography, that means not introducing artificial analog artifacts that you might find specific to toy cameras, hand coated processes, or the kinds of edge defects you might find with Polaroids or wet plate processes. You can't get very far without finding holes in that argument, and I'm ok with that. All art making is inherently personal and the idea of respecting the nature of the medium is simply something I have used to set limits on what I am willing to do when creating my own work.

I realize that with digital photography, it can be argued that its nature doesn't actually exist outside of the arrangement of electromagnetic charges, which can be endlessly manipulated to fit whatever the artist's vision might produce. If part of your personal approach is blending aspects of other processes, great, but for my work and how I teach, I tend to shy away from that. I prefer to work with what was in front of the lens, and the "believable" tonal edits that are an extension of the larger tradition.

That idea of “believability” is something I have been chewing on for a while, and I think it is part of why some people are drawn to black and white photography, and abstraction in particular. It is something I call the “Edge of Disbelief.”

Black and White and the Edge of Disbelief

Over the course of the last 150 years, we have come to understand and perceive photographs as representations of reality, in the sense that what was in the picture was (more or less) what was in front of the lens. I also think that is partly what makes landscape and abstract photography so interesting. A photograph can be a window into the world, a look through the window in which the maker sees the world, and into the maker themselves.

Lincoln County, Colorado, 1977
Robert Adams from the exhibition Landscapes of Harmony and Dissonance

Why People Photograph and Beauty in Photography, both by Robert Adams, are two of my most loved books on photography. One particularly relevant quote from Why People Photograph has to do with the wonder of what was there in front of the lens:

At our best and most fortunate we make pictures because of what stands in front of the camera, to honor what is greater and more interesting than we are. We never accomplish this perfectly, though in return we are given something perfect--a sense of inclusion. Our subject thus redefines us, and is part of the biography by which we want to be known.

It is one thing to push against conventions, and another to push them over the edge of the cliff. Since we now have the power to easily and drastically alter what was in front of the lens, if the edits (vastly increased saturation, sharpness, contrast ranges, copied and pasted trees, etc.) are too pronounced it will look "unreal" and we have a hard time accepting it as a representation of the world. It goes over the edge of disbelief and we don't accept it. Simply put: if not done well, it will look wrong. I think that is what is behind the aversion to the Trey Ratcliff HDR approach. I also think that is why Jerry Uelsmann's photographic creations are so interesting and why John Paul Caponigro's earlier digital creations are less so. It’s not exactly the same, but similar to why, when I see white dove wings cloned onto a "fine art nude," I roll my eyes, sigh dramatically, and hurry along to the next thing.

Abstraction

In *Beauty in Photography*, Robert Adams writes about not photographing in color:

If, as a personal matter, I have chosen not to make color pictures, it is because I have remembered how hard it is to write good free verse, with which color photography has some similarities, both being close to what occurs naturally.

I’ve heard other people refer to the same idea, and I tend to agree that working in black and white begins one step closer to abstraction—already separate from the “real world.” Because of that, you are freed from the tie to reality that color photography represents, which can lead you to take more creative liberties when editing and printing.

Edge of San Timoteo, San Bernardino County, California
Robert Adams from Los Angeles Spring

That idea might be why some people are more drawn to spatially ambiguous and abstract photographs than to the "easier" and less visually dissonant photographs. In the more abstract photographs there is something more to discover—some mystery that continues to hold the viewer’s attention over a longer period and repeated viewings. Some of the criticism of the "St. Ansel" landscapes and their derivations is that once the initial awe has worn off, there might be little left to sustain further interaction. I also think that is why it is so hard to make color photographs that work well at that deeper level. I'm not saying that it can't be done with color photographs, but you are more limited in the adjustments you can make, because color can too quickly go over that edge of disbelief and simply look wrong.

I recently went back and made better scans of a bunch of my older 8x10-inch contact prints. These are not all of the photographs from the last 15 years, but the roughly fifty made between 2002 and 2012 that I keep in the "showing set," that have withstood the continuous culling over the years (in some cases because of the way they fill in the sequence), and that continue to hold my interest. Those are the ones where there was not only a strong sense of abstraction, but also a sense of belonging to a recognizable and believable world.

Black and White with Capture One Pro 8 - Part 2: Workflows

Getting Started - Basic Exposure and Contrast Adjustment in Capture One

Before we go into the black and white conversion techniques and adjustment layers in C1, it is good to first make some initial RAW conversion settings on the color image.

Starting Points

Base Characteristics

As you would probably imagine, and are most likely already accustomed to doing, take a look at the overall exposure of the image you are going to work on. If there are serious over- or underexposure issues, correct that with the exposure settings first. However, before going too extreme with the exposure setting, check to see if there is a processing curve that is better suited to the image. Sometimes the default initial curve isn't appropriate for that particular image. If the shadows look too blocked up, choose the “film extra shadow” option. Conversely, if it feels too flat, choose the “film high contrast” setting. I tend to err on the side of too little contrast, or a little flat, in the initial processing; low contrast is easily corrected with an adjustment layer, and increasing contrast is best done gradually so you don’t overdo it. We want to start with a full range of tones, with midtone separation, but not overly contrasty.

Assuming the exposure is OK, one of my first moves is to the High Dynamic Range setting. Usually anything that says HDR makes me run away like my pants are on fire, but this option in C1 is actually one of the main reasons I prefer it as a RAW converter. Adjusting this setting has a way of opening up shadows and bringing back detail in highlights that is easier, faster, and more beautiful than in other RAW converters. My initial Dynamic Range settings are usually between 10-50, and sometimes more for the highlight setting, depending on the scene.

Clarity and Structure

The update last year to Capture One Pro 8 brought us the addition of "Natural" to the Clarity and Structure settings, which I think is much nicer than the previous options of Punch/Classic/Neutral. The Natural setting feels better and less processed than the other options.

Clarity

Next I will usually go down to Clarity and Structure and give those a boost. As I mentioned, I don't like to overdo the contrast too quickly, so I keep the clarity setting between 24-36. If certain parts of the image need more clarity, I will do it selectively with adjustment layers.

Structure

I tend to keep the structure setting fairly low. I think the halo around sharpened edges is a dead giveaway of bad inkjet prints, and keeping the initial structure setting low can help prevent that problem. We want just enough to make it crisp, but not so much that it appears brittle.

Basic Black and White Conversion

The prebuilt color filtration settings are decent and are usually good starting points, but I prefer to create my own and use those as a starting point for each image (naturally).

Creating Custom Preset Conversion Filters

The presets I create for the black and white conversion settings are based on the work I’ve done in Photoshop and Lightroom, but the adjustments don’t need to be as extreme when working in Capture One. Most of my presets have a similar shape with a stepped pattern from one color range to the next. The idea here is that there is never a large jump that could lead to a posterized look. Here is an example of a yellow-green style filter I use for pictures with clouds and skies, but which also works well for fall foliage.

I created this image in Photoshop to approximate the visual spectrum, and use it to create different custom black and white conversion filters. You can download it here from my public BWMastery Toolbox Dropbox folder, then import it into your Capture One catalog and use it to see the effect of different black and white filtration settings.

More Control with Color Balance in Black and White

Changing the color balance does work, and allows you to change different parts of the scale differently. When combined with color “filters” for black and white conversion, it can add an additional layer of control and impact to the image.

Even More Control with the Color Editor

The Color Editor tool provides an even deeper level of control, in that you can select a range of color and shift it around. When the black and white checkbox is enabled, you can see how decreasing the saturation or luminosity of specific colors can help smooth any harsh transitions, or temper or increase the filtration of that range of colors.

Sign up for a One-on-One Skype session to see this workflow in action

I offer one-on-one Skype training sessions on a limited basis to personally show you my workflow using your images. During the session you are able to see my display as I work through your image in real time as I explain each step in complete detail. You are able to interject questions at any time or ask to see the steps as many times as needed. I also provide a video recording of the session for you to refer back to at your leisure.

Each session is long enough to work quickly with a technique on a few different images, or work to completion with a single image. I also offer a 3-session block that is designed around my Intuitive Localized Contrast Control method, and preparing for print with my personal sharpening and printing workflow, although since each session is personalized to your needs it can be on any topic you choose. 

One-On-One Instruction: 3 Session Block ($195.00)

Black and White with Capture One Pro 8 - Part 1: Custom Workspaces

In my recent Phase One webinar on Black and White with Capture One Pro, I first talked about setting up a workspace that allows for a frictionless workflow, moving away any unnecessary tools or tabs.

I essentially keep all the work in three tool tabs (Exposure, Black and White, and Local Adjustments) and try to minimize the back and forth between the different tool tabs.

Here’s a short explanation and a few screenshots of the different tabs, and how I have each of the different tools organized. One huge benefit of C1 is the way you can have the same tool in multiple tabs. It might seem redundant at first, but you'll find that you need to access the same tool at different points in the workflow, and not needing to go to a different tab, make an adjustment, and then jump back is a huge savings in terms of time and attention.

Exposure Tool Tab

  • Base Characteristics
  • Exposure
  • High Dynamic Range
  • Clarity
  • Levels
  • Curves

Black and White Tool Tab

  • White Balance
  • Black and White
  • Exposure
  • Color Balance
  • Color Editor
  • Curves
  • Film Grain

Local Adjustments Tool Tab

  • Exposure
  • High Dynamic Range
  • Clarity
  • Sharpening
  • Noise Reduction
  • Moire
  • Purple Fringing

Three Secondary Tool Tabs

Sharpening and Noise Reduction
Composition (or Crop)
Process Recipes and Output

Three Tertiary Tool Tabs

Library
Info
Color

ILCC Part 3: Outflanking the Digital Print

Working with Curves Adjustment Layers and Masks

This is the last post in a three-part series about what I first half-jokingly called my Intuitive Localized Contrast Control technique, or ILCC for short (here are links to Part 1 and Part 2). Everyone has their fancy way of saying "changing contrast": George DeWolfe calls it Luminosity and you need his PercepTool plugin ($90) to do it right, and Joel Tjintjelaar has his iSGM2.0 method (which is just a layer mask with a gradient inside a selection). We can come up with fancy names for it all day long, but really all you are doing is changing tones in specific areas. I call it "look at the image to adjust the tonalities, and then paint in the adjustment until it looks right."

All joking aside, this ILCC technique is based on what I wrote back in this post, It Only Looks Steep When You're Standing at the Bottom. All we are ever doing is changing the relationship of one tone to another—we're changing contrast. There are a ton of ways to arrive at the final result, and some are easier than others. I prefer to do it with curves adjustment layers, their layer masks, and a brush tool. Before going into exactly how I do that, let's first look at how selections and adjustments are commonly taught.

Most people learn to work with adjustment layers, masks, and selections completely opposite of how I work and prefer to teach it.

In most cases people will follow steps in this order:

  1. Make a selection, either a very simple one with the quick select or lasso tools, or by using the quick mask mode and the paint brush to define the selection.
  2. Then, create a new adjustment layer—this could be curves, levels, color balance, etc. Doing so automatically uses the currently defined selection to set the transparency of the adjustment layer mask, making it perfectly clear and applying 100% of the adjustment.
  3. Edit the adjustment layer in the properties panel until the desired effect is achieved. And then, maybe, refine the edge if there are harsh or obvious tonal transitions or editing artifacts.
  4. If the adjustment is too much or too little, either alter the adjustment, or decrease the opacity of the layer.

This might work adequately for certain types of quick edits, but I usually advise people to steer clear of working this way. When you make a selection and new adjustment layer in this order, 100% of the adjustment comes through the white area of the mask, and there is a very sharp edge between areas with 100% adjustment and 0% adjustment. Yes, the selection could be feathered before or after the mask is created, but if the selection is not made very well, the picture can start to have a collaged look, where tonal edits are pasted directly up against other tonal patches. It might not look realistic, or it might have an unbelievable, overly dramatic, over-processed look. In some of the worst cases I've seen, it looks like the mask was made with preschool safety scissors.

Here is a short workflow demonstrating this usual method.

At first glance it seems fine, but I find that creative control and the ability to balance the tonal structure of the image are greater when working in the following manner.

My ILCC Approach

I prefer to work with adjustment layers, masks, and selections completely differently from how it is usually taught, and how the majority of users seem to work. My process takes some of the same techniques learned in the analog darkroom and integrates them into my digital workflow. The idea is based around something Michael A. Smith wrote about in his original article, On Printing.

He talks about making two test prints of the full image, one that is too light and the other too dark—not test strips of just a small part of the image. His point is that when you make a full-page test print in the darkroom, you will arrive at your initial exposure faster, and at the same time see how all the tones in the image change in relation to the increase and decrease in exposure—across the entire print—which will give you a reference for making more intuitive burning and dodging decisions.

I have been using modified versions of that technique in the darkroom since 2001, and at some point over the years developed the following personal digital workflow based on those principles. I'm not claiming that I discovered this; it is just the sequence I prefer over making local adjustments in RAW editors like Adobe Camera Raw or Lightroom. The major failing of making local adjustments in Camera Raw, Lightroom, and Capture One Pro (to a lesser degree) is that those programs force you to paint in your adjustment before you can see its effect—you never really know what you are going to get beforehand. The other major limitation of doing these kinds of tonal edits in those programs is the difficulty of refining the adjustment to a detailed selection. Yes, you can go back and alter the adjustment layer, but those programs lack the ability to make the kind of intuitive editing decisions that you can easily make in Photoshop with curves and an adjustment layer by following these steps:

  1. With nothing selected (cmd/ctrl+d to deselect), and with the top-most layer active (highlighted in the layers panel), create a new Curves Adjustment Layer. This will load directly on top of the previously active layer with a white (transparent) layer mask (you want the mask to be transparent so you can see the overall effect of your adjustment).
  2. Activate the scrubby slider icon in the curves adjustment layer to work directly with the tones in the image instead of inside the curve's properties panel. The idea is to watch the image and how the overall tonal structure changes as you change the curve point, not look back and forth between the properties panel and the image.
  3. Locate the tone in the image you want to edit—the one you want to make lighter or darker—and click without letting up and simply drag up to make it lighter, or down to make it darker. Working this way will allow you to see how all the tones are changing based on that one control point. When you reach a point that you think is acceptable, let up from the mouse or pen tablet to accept the edit. I tend to push it to the point where it is almost too much of an adjustment, and then control the degree of the effect in steps 6-7.
  4. Use the same technique in step 3 to set and adjust another control point on the curve, either with the scrubby icon or directly on the curve itself. I recommend only using 2-3 widely spaced control points to prevent any tonal reversals, or overly flat/horizontal sections of the curve.
  5. Once the image/area is looking about right, activate the layer mask by clicking in the layer mask icon and pressing command/ctrl+i on the keyboard. This will invert the layer mask, completely hiding your adjustment.
  6. The Brush Tool: Now it’s time to paint with white. Press d (for the default foreground/background colors) and then x (to exchange the foreground/background colors) on the keyboard. This will set the foreground color to white and the background color to black. Now choose an appropriately sized brush for the area you want to adjust: use the keyboard shortcut b, use a soft edge (if you right-click with the brush tool active you can change the size and hardness—use a hardness setting of 0), and use low opacity and flow settings (between 20-40). You can now paint in the desired effect gradually, watching how the adjustment affects the structural balance of the rest of the picture, and noticing if there are additional adjustments that need to be made in other parts of the image.
  7. If the area to be edited has hard and definite edges, like the wall of a building, rocks and mountains against a sky, or a portrait against a background, and painting in the adjustment cannot be allowed to spill over into the adjacent areas, then it’s advisable to create a selection that will confine any painting within the mask to just that defined selection. The fastest way to do this is with the quick select tool (w); then, while still using a soft brush and low opacity and flow settings (or creating a gradient), simply paint within the selection without worrying about spilling over into other areas of the image. If done carefully, you can feather with the soft edge of the brush right up to the edge of the selection to "hide" your edits even further.
  8. One thing other editing applications like Lightroom can't do is show you the effect of the adjustment without the mask applied. To do this in Photoshop, shift+click the layer mask icon (or right-click the layer mask icon and choose disable layer mask). That will put a red X through the icon and show how the adjustment affects the underlying image without the mask. You can then shift+click the icon again to re-enable the mask, or right-click the icon and choose “enable layer mask.” If you did not make any other adjustments, you can simply cmd/ctrl+z to toggle the mask on and off without loading up your history states. I do this back and forth quickly to see how different parts of the image are being affected, to see if any other adjustments are needed, or if I can use that adjustment in a different part of the image.
  9. This technique can be used in conjunction with luminosity or channel-based selections for painting in the adjustment based on the luminance values in the image. There are a few Photoshop plugins and actions that will make these kinds of selections for you, but they are just as easy to make and apply yourself (a rough sketch of the idea follows this list).
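Here is that luminosity-mask idea reduced to numpy (a conceptual sketch, not a Photoshop script): the mask is just the image's own luminance, so an adjustment "painted" through it lands mostly on the highlights.

```python
import numpy as np

def apply_through_luminosity_mask(gray, adjust):
    """gray: float image, 0-1; adjust: a callable applied to the whole image.
    Blends adjusted and original pixels weighted by luminance, so bright
    areas receive most of the adjustment."""
    mask = gray                        # "lights" mask; use 1 - gray for darks
    return mask * adjust(gray) + (1 - mask) * gray

img = np.random.rand(4, 4)             # stand-in for a real image
brighter = apply_through_luminosity_mask(img, lambda g: g ** 0.8)  # gamma lift
```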

I do this as much as necessary, and might have 10 or more curves adjustment layers that deal with different parts of the tonal scale or parts of the image. Once you get used to working this way, you’ll find that all of this happens very quickly, and you’ll develop a muscle memory for using the d key to select the default foreground and background colors, the x key to exchange them, the number keys for changing the opacity, and ctrl+right-click for changing brush sizes. The other main benefit is that you can go back and paint in a different area of a previously made adjustment layer (like in step 8 above).

In an upcoming post I will demonstrate how to quickly and easily create temporary luminosity selections for use when painting in the adjustments with this layer masking method.

About this Photograph

This was made recently on a trip to southern New Mexico, half-way between Silver City and the Middle of Nowhere. It was one of those quick, pull over and grab the tripod and camera and run into the desert and hope not to get hit by lightning moments.

I am using this image for an upcoming Dedicated Black and White Workflow Webinar with Capture One Pro 8, and will discuss how the techniques discussed in this series of posts can be translated to working with Capture One Pro 8 adjustment layers.

Stay Tuned.

August 2015 Exhibition Announcement

There is a yearly group photography exhibition that I have been part of for the past five years with the Light Room, an artist-run non-profit photographic arts center in Philadelphia.

In the past I have used this to show new work made during that year, with work ranging from inkjet prints of found objects, to platinum-palladium prints on handmade Japanese paper, to gold-leaf-backed transparencies of images of the Sun. These past shows have been organized around a specific body of work, but this year I decided to take a much different approach and looked back to re-examine photographs of the landscape of the American West I have made over the past fifteen years.

Richard Boutwell-Owens Valley, 2007

This exhibition allowed me to create a show representative of the techniques I have been writing about on this site and teaching in one-on-one lessons for the last several months, ranging from different scanning techniques for large- and medium-format negatives, to different raw conversion techniques for digital capture, to custom ink mixing with Piezography and profiling with QuadToneRIP.

There will be seven other photographers, with a wide range of work and processes, including hand-colored salt prints, platinum/palladium prints, gelatin-silver prints, and color inkjet prints on paper and metal.

There is a big opening this Friday, August 7th, from 5-9PM and a more relaxed artists’ reception on Sunday, August 9th, from 2-5PM.

If you are in the area and are able to attend one of those dates I would love to see you.

White Sands National Monument, 2015

Exhibition Details

The Light Room, one of Philadelphia's select photography organizations, returns with the fifth in its annual series of summer group exhibitions. Eight photographers will present their latest work across a range of media and subject matter. The Light Room, founded in 2000, is an artist-run non-profit photographic arts center in Philadelphia. Our goal is to foster an environment in which members can achieve their photographic objectives through the use of modern, professional darkroom and studio facilities, ongoing education, and the sharing of ideas.

Exhibiting Artists

Mary Ann Broderick-Pakenham, Richard Boutwell, G. A. Carafelli, Carlos Chan, Ronald Corbin, Annarita Gentile, Josh Marowitz, and Al Wachlin, Jr.

First Friday: August 7, 2015 5 - 9PM

Artist Reception: Sunday, August 9, 2015 2 - 5PM

Closing: Sunday, August 30, 2015 2 - 5PM

Address: 45 N. 2nd St. Philadelphia, PA 19106 Phone: (215) 625-0993

About the Photographs

All photographs in the show (and seen above) are carbon pigment prints on Canson Edition Etching made with a custom six-shade ink set blended from Piezography Carbon and Selenium inks. This mix of inks on this particular paper creates a wonderful color that somewhat resembles great modernist vintage gelatin-silver prints made between 1930 and 1950.

If you've ever wondered what these things look like in the flesh then this is a great opportunity to get up close and personal (and enjoy some gallery wine while you're at it).

In Search of the Perfect QTR Profile Part 3: QTR Correction Curve Tool

I don't mean for this to become a site dedicated to printing with QuadToneRIP, and I have a few exciting new things waiting in the wings that will be coming out in the next few months. In the meantime, here is a little tool I put together that automatically creates a QTR correction curve from any 21-step measurement file and outputs a set of input and output points that can be pasted into the QTR ink descriptor file. This takes the place of the option to embed a Photoshop .acv curve in the gray_curve= line and eliminates the problem of clobbering your profile if you move or edit the .acv file.

Anyone who has made their own QTR profiles has probably encountered the annoying "Lab values not in order" error when trying to linearize their profile. While this tool might not solve that problem completely, it should produce a profile that prints with a fairly straight-line density increase and gets you through the final linearization steps without any additional problems.

I have not tested this for creating a correction curve for printing inkjet negatives for alternative processes, but it should work for that as well—at least in theory...

The instructions and screenshots below show the steps for a Mac, but the process is nearly identical for the Windows QTRgui (or when working with the ink descriptor file in a plain-text editor on Windows).

Step-by-Step Instructions

  1. Print and measure the standard 21-step target with the base raw profile (a profile without any inputs in the gray_curve= or linearize= lines).
  2. Run the measurement file through the QTR-Linearize-Data applet to parse the Lab_L data into a nice neat column.
  3. Select all and copy everything, text graph and all, to your clipboard (cmd/ctrl+a then cmd/ctrl+c).
  4. Create a new blank Excel workbook and paste the text file data into it.
  5. Select ONLY the cells with Lab_L values (all 21 of them).
  6. Open the BWMASTERY-21-step-QTR-Correction-Curve-Tool Excel template found below and paste the Lab_L measurements into cells E10 through E30 (you can simply select cell E10 and press cmd/ctrl+v to auto-fill the rest of the cells).
  7. Select the highlighted cell E32 and copy and paste its contents into the gray_curve= line in the ink descriptor file.
  8. Save the profile with a unique name and then install the profile like normal to create a new set of overlapping QTR curves using this new correction curve.

The resulting profile should print nearly linear and can be fine tuned with the standard linearization process.
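
If you are curious what the template is doing under the hood, the whole correction reduces to a single inverse interpolation, sketched below in Python with placeholder measurements. This is only an illustration of the idea, not the template itself, and the exact pair syntax for the gray_curve= line should be checked against the QTR documentation:

```python
import numpy as np

# 21 Lab_L readings from the raw profile's step wedge, 0% to 100% ink in
# 5% steps. These values are illustrative placeholders, not real data.
measured_L = np.array([97.0, 94.1, 91.0, 87.6, 84.0, 80.1, 76.0, 71.7,
                       67.2, 62.5, 57.6, 52.5, 47.2, 41.7, 36.0, 30.1,
                       24.0, 17.7, 11.2, 7.5, 4.6])
steps = np.linspace(0, 100, 21)          # nominal input values (% gray)

# Ideal response: a straight line in L* from paper white to maximum black.
target_L = np.linspace(measured_L[0], measured_L[-1], 21)

# Invert the measured response: which input actually prints each target L?
# np.interp needs an increasing x-axis, so feed it the reversed scale.
corrected_in = np.interp(target_L, measured_L[::-1], steps[::-1])

# Emit input;output pairs for the gray_curve= line of the ink descriptor.
pairs = " ".join(f"{s:g};{c:.1f}" for s, c in zip(steps, corrected_in))
print(f"gray_curve={pairs}")
```

The Excel template does the same inversion with lookup formulas; the point of the sketch is just that nothing mysterious is happening.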

The next few screenshots are of the ink graphs from a custom six-shade carbon/selenium blend I made for an upcoming show in August. I intentionally created the raw profile to print much darker and more blocked up than I normally would, to demonstrate how close the correction curve can get to a QTR-linearized profile. There were no reversals in the initial curve, so the standard linearization would have worked. Similar to the new Linearize-Quad app Roy Harrington recently released, this tool effectively allows for a two-step linearization process. It might not be right for every situation, but it is good to have in the toolbox so you can get through profiling and get to printing faster.

BWMASTERY 21-step Correction Curve Tool Downloads


CORRECTION CURVE RESULTS

I did a series of controlled tests this morning comparing measurements made from profiles using the standard QTR linearization method to those using the correction curve tool I created. I tested four variations of a new custom 6-ink profile using a mixture of Cone Carbon and Cone Selenium shades 2-6 with STS Matte Black as Shade 1. The same 21x4 measurement file was used to create a QTR linearization and a correction curve for each variation of the profile, to ensure that errors in the readings were not the cause of any irregularities between the two.

In Search of the Perfect QTR Profile Part 2: Relinearize Quad Curves

In the post "How to Edit .Quad Files without Burning Down the House" I said that re-linearizing the QTR .quad files used in printing was not possible and you shouldn't try it at home. I then set out to prove myself wrong.

Lab_L measurements from an Eboni 6 profile made with the QTR curves creation tools.

At first this started off as just trying to map the quad values made with the QTR curve creation workflow to a different shape—take the straight line and put a slight curve in the shadow end of it. It seemed like it would be easier to push around linearized quad values, but when I got into it deeper I found it really didn’t matter if they were linear to begin with or not.

Piezography Curves, and the occasional need to re-linearize them

There is no arguing that Piezography is a turnkey product, and that it is much better than (or at least equal to, after a whole lot of work) what anyone could do themselves with the same printers and inks. The only problem is that when it doesn't work, there is really no way to know exactly what is wrong without flushing ink or buying new equipment. Then come the inevitable support emails and desperate forum postings, and then maybe a printer ends up in the bin and another user opts for HP or Canon the next time around.

Lab_L Measurements from an existing K6 profile for the Epson 1430 that indicates a problem with the printer.

I have had three different printers and three different ink sets that have not been able to make a linear print—they either printed much darker than intended or made prints with distinct bands at different parts of the gray scale (and this is something I have tested every possible way). The one with banding was a brand-new Epson 1430; the others were an old 3800, which is still going strong, and a 4800 that died a few years ago.

At first I thought my problems were just a matter of user error, so I double-checked my methods with a different ink set (with the same shade inks) and had the same problem. Buying a new printer was not an option, so it was a matter of sorting out how to get the thing to work for me. All that testing and refining of the QTR curve creation method started to inform parts of the upcoming QuadToneRIP book, but it also led me to "hacking" the quad file and really understanding how some of the things under the hood work. I'm glad I had a couple of wonky printers, because otherwise there would be a whole lot I wouldn't know now.

I have spoken to other people who have absolutely no problems with their printers loaded with Cone inks, but there are others who have problems similar to what I am describing. From what I have seen and experienced, a printer that is only slightly out of spec can cause problems with the Piezography curves. That isn't meant to diminish Jon's work or his product (I still think it is the best out there), but the product needs a printer in perfect working order, or the shape of the curves can introduce problems. The Piezography-style curves might even be less prone to error than QTR-style curves, but the QTR problems can be solved by the user.

About my method for re-linearizing the Quad Curves

I have previously written about having a way of more or less automatically filling in the ink limits and overlaps for partitioning K6/K7 inks using the standard QTR curve creation tools and workflow (as well as some additional gray gamma and gray overlap settings). This is information that can be found and pieced together from various sources online; I just did it in a more logical, repeatable, and refined manner, and this is what will make up some of the upcoming book. The original quad curves created with this workflow produce a smooth, linear density distribution. With my quad curve relinearization method, I can map those quad curves to a differently shaped gray curve, introducing a slight compression in the shadows without the blocked-up shadows you get from "re-linearizing" through the color-managed workflow. This is done by reading in the original printed densities, defining target densities with either linear or slightly compressed shadows, automatically creating a correction curve, and then mapping the original quad values to corrected ones that will print the target densities. All of this is done outside of the QTR curve creation tools, so the "Lab values not in order" or "not constantly increasing" linearization errors are never a problem.
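
For those who want the gist of the math, here is a minimal Python sketch of that read, correct, and remap chain, under stated assumptions: one channel at a time, a 51-step Lab_L measurement, and a toy shadow-compression term. The real template smooths the data, handles every channel, and rebuilds the .quad header, none of which is shown here:

```python
import numpy as np

def relinearize_channel(quad_vals, measured_L, shadow_compress=0.0):
    """Remap one 256-entry .quad channel to print the target densities.

    quad_vals:       the channel's 256 ink values from the original .quad
    measured_L:      51 Lab_L readings of the target printed with it
    shadow_compress: 0 gives a strictly linear L* target; a small positive
                     value (a hypothetical shaping term) darkens the
                     mid-shadows slightly and eases the curve into black
    """
    steps = np.linspace(0.0, 1.0, 51)      # nominal input, 0..100%
    inputs = np.linspace(0.0, 1.0, 256)

    # Target response: a straight line in L* from paper white to max
    # black, optionally bent toward richer shadows.
    shape = steps + shadow_compress * steps**2 * (1.0 - steps)
    target_L = measured_L[0] + (measured_L[-1] - measured_L[0]) * shape

    # Invert the measured response: which nominal input actually prints
    # each target L? (measured_L must darken monotonically.)
    corrected = np.interp(target_L, measured_L[::-1], steps[::-1])

    # Sample that correction at all 256 inputs, then pull the original
    # quad values through it to get the relinearized channel.
    correction_256 = np.interp(inputs, steps, corrected)
    return np.interp(correction_256, inputs, quad_vals)
```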

Since my method works directly from an existing .quad file, I can also relinearize existing Piezography curves for new papers, for aging or out-of-spec printers, or to map existing profiles to a differently shaped gray curve. The new linearized quad values are then pasted into a text file template with the correct header information for the printer and the number of inks being used, and the file is saved with the .quad extension. That new quad file can then be placed in the correct folder for the quad printer (depending on the Mac/PC requirements) and installed like any other .quad profile.

It is also possible for me to create custom quad profiles from the existing Piezography master curves (although the higher ink limit of ~60% in the K channel is a problem and can cause reversals from 96%-100%). I have solved this issue by developing a way to parse Piezography master curves down to 21 control points so ACV curves can be created and edited in Photoshop and then assigned to each channel in a QTR ink descriptor file (doing this with the ink_curve= input bypasses the standard QTR gray ink partitioning functions and creates new master quad curves based on each of the ACV curves). You can then control the amount of ink for each channel with the ink limit settings in the ink descriptor file. The 51-step grayscale is printed and measured, and then the original quad file and measurement file are imported and automatically linearized to the ideal densities.
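
The parsing step itself is easy to sketch: a quad channel is just 256 ink values, so the control points are evenly spaced samples rescaled to the 0-255 range a Photoshop curve uses. A hypothetical helper (the name and the even spacing are illustrative, not the actual tool):

```python
import numpy as np

def quad_to_control_points(channel_vals, n_points=21):
    """Downsample one 256-entry quad channel (values 0-65535) into
    n_points (input, output) pairs on the 0-255 scale that a Photoshop
    .acv curve expects."""
    idx = np.round(np.linspace(0, 255, n_points)).astype(int)
    vals = np.asarray(channel_vals, dtype=float)
    return [(int(i), int(round(vals[i] / 65535.0 * 255.0))) for i in idx]
```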

My Quad File Relinearization Service

I have this relinearization method more or less perfected in a spreadsheet template that I am using as a prototype, and I am now working on developing it into a possible standalone application. Until that is ready, I am offering a relinearization service for people who can email or upload the 51x3 measurement file and the .quad file used to print the 51-step target (the single 51-step target is measured three times to average out measurement errors). You will receive a graph of the original measurements and a relinearized quad file.

How is this different from what Ink Jet Mall offers with custom Piezography profiles?

I've chosen to offer this service as an affordable alternative for people who can print and measure their own 51-step target, and it is not limited to Piezography curves. Since this works directly with existing .quad files, I can relinearize any QTR curves based on the 51-step measurement file. This can work with UC K3 profiles, as well as Eboni-6 profiles made with the QTR curve creation tools**.

I am not claiming that profiles created with my method are as good as what you would receive with a custom profile from Ink Jet Mall. My method doesn't require printing and mailing the full 256-step grayscale file that their PiezoProfiler requires, and while my correction curve and quad value interpolation method is very good and extensively tested, it does not work with the same kind of direct lookup table for determining the final quad values. This is meant as a way to affordably test new papers, correct for printer problems, or simply map an existing linear profile to one with a slightly richer shadow curve that doesn't create the blocked-up shadows that occur when printing with an ICC profile and Black Point Compensation.

Why a 51x3 measurement file? 

I've tested this with the 21x4 random target included with QuadToneRIP and found that 21 data points are not enough to correct for the small bumps that often occur when using so many overlapping inks. I've determined that 51 control points allow enough precision for the functions to work accurately while not being too arduous to print and measure. However, there are always small errors in measurements of the same patch and/or target, and I've found that averaging three different measurements of the same printed target is sufficient to smooth out any measurement errors. It is also better to measure the same target three times than to print three targets and measure each once.
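
As a rough sketch of what that averaging amounts to, assuming the three passes have already been parsed out of the measurement file into a 3 x 51 array (the file name and the 0.3 L* repeatability threshold are assumptions, not part of my actual workflow):

```python
import numpy as np

# Three passes over the same 51-patch target, pre-parsed from the CGATS
# measurement file into rows (hypothetical file name and layout: 3 x 51).
passes = np.loadtxt("ProfileName-51x3-LabL.txt")
assert passes.shape == (3, 51)

averaged_L = passes.mean(axis=0)    # the values the linearization uses

# Sanity check: flag patches where the three reads disagree by more than
# a typical instrument repeatability (an assumed ~0.3 L* threshold).
spread = passes.max(axis=0) - passes.min(axis=0)
print("Suspect patches:", np.where(spread > 0.3)[0])
```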

You can download the 51x3 reference file for X-Rite's ProfileMaker5 MeasureTool and measure the 51-step target included with QuadToneRIP. For those with Mac OS X Lion and later, I've created an i1 Profiler workflow and an updated 51-step target that works with the i1 Pro spectrophotometer. You can download these at the link below. Those with the Spyder Print spectrophotometer can print the current 51-step target and use the reference file included with QTR to measure and save three separate measurement files.

SpectroWhat?

Alternatively, for those without a way to measure the printed target, for a small additional charge you can upload your quad curves and then mail in the target for me to measure, and you will receive back a relinearized set of QTR curves. The page to upload your quad curves and the instructions for mailing the target can be found here: http://www.bwmastery.com/quadlin-mail-target

INSTRUCTIONS

SPYDERPRINT OR X-RITE PROFILEMAKER5 MEASURETOOL

  1. Print the Step-51-gray.tif found in the EyeOne folder in the QuadToneRIP application folder using the QTR Curves you want to relinearize.
  2. Dry the print overnight or with a hairdryer for 3-5 minutes
  3. Download the special reference file to measure the QTR-51-step grayscale three times. 
  4. Measure the 51-step target three times and save the measurement file with the same name as the quad curve and include 51x3 at the end of the file name before the .txt extension.
  5. Upload the Original Quad Curve File and the Measurement File of the 51-step target using the form below.

I1PROFILER

  1. Download and extract the i1Profiler 51x3 workflow folder.
  2. Follow the instructions here for printing and measuring the updated 51-step target for i1 Profiler. 
  3. Upload the Original Quad Curve File and the Measurement File of the 51-step target using the form below.

CAN'T FIND YOUR .QUAD FILES?

These QuadToneRIP curves are buried in the system folders and can be found by navigating to the file paths below:

  • Mac: Macintosh HD/Library/Printers/QTR/quadtone/
  • Windows: c:\Program Files\QuadToneRIP\QuadTone
  • Then choose the folder for your printer model and ink set
  • Then you can drag the quad file you want to relinearize to the field in the upload form below

** I have not yet tested this with digital negatives for alternative processes, so for now this service is limited to positive inkjet prints, but I hope to be able to do the required testing for digital negatives within the next few months.

A New i1 Profiler Workflow for the QuadToneRIP 51-step Grayscale Target

Most people use a 21-step (5%) target for measuring and linearizing their QuadToneRIP profiles. Using the 21x4-random target is a step better, and I described this process in my post last year with instructions for using i1 Profiler to measure the linearization targets. While the 21-step target does a good-enough job with a 3-partition profile, sometimes a 51-step (2%) target can show you where little bumps might be hiding between those 5% steps.

Problems with Smaller Slices

Measure a single patch a few times and you will see that it is usually off by a small percentage each time. Smaller steps will show more information, but they can also show bumps in the curve where none actually exist. Those measurement errors need to be averaged out to determine whether the problem is the actual ink distribution or inconsistencies in how the light reflects off the paper surface as the measuring device moves over each patch. After measuring thousands of patches over the last few years, I have settled on measuring each 51-step target three times. A 51x4 might be more accurate, but 9 passes seems to be about the limit of my patience… To make these averaged measurements easier, I have updated the 51-step 2% target included with QuadToneRIP so that it works with i1 Profiler and the i1 Pro spectrophotometer. The workflow allows you to measure the same target three times (measure rows 1-3 once, then go back and measure rows 1-3 again when the on-screen instructions indicate rows 4-6, and measure them once more when the instructions call for rows 7-9).

Detailed Instructions:

i1 Profiler

These instructions and screenshots were created on a Mac, but they are applicable to i1 Profiler for Windows. You will need to download the i1 Profiler workflow from this Dropbox link.

Print the 51-step target included in the downloaded zip file from the QTRgui on Windows or from PrintTool on Mac OS X, and make sure to disable color management. The larger-format target that allows reading in strip mode requires you to print it in portrait orientation.

I suggest you create a folder on your desktop for the workflow and any saved measurement files. The default directory for these files is buried in the Application Support folders on the Mac, and on Windows in the Application Data folder (a hidden folder that can be revealed in the Tools > Folder Options menu) inside Documents and Settings.

Mac:

Macintosh HD/Library/Application Support/X-Rite/i1Profiler/ColorSpaceRGB/PrinterProfileWorkflows

Windows:

C:\Documents and Settings\All Users\Application Data\X-Rite\i1Profiler\ColorSpaceRGB\PrinterProfileWorkflows

Drag workflow to this folder on Windows

Launch the i1 Profiler application and make sure you are in Advanced user mode.

  • Click either RGB or CMYK Printer.
  • Click Profiling (this just gives you access to the next screen, where you are able to load the saved workflow).
  • Alternatively, you can simply click "go to saved workflows" in the lower-left portion of the screen.

  • If you placed the downloaded i1 Profiler workflow in the saved-workflows folder, click the name of the workflow in the sidebar under saved workflows, then click Load Workflow in the bottom right corner of the window.
  • This will open the default directory for i1 RGB profiling workflows. Navigate to the downloaded workflow and open.
  • It will automatically load the 51x3 reference file, and take you to the chart measurement step.

To measure the chart three times:

  • Follow the on-screen instructions for measuring rows 1-3
  • When it instructs you to measure rows 4-6, measure rows 1-3 again
  • Repeat this when instructed to measure rows 7-9
  • Save the measurement data under the “Page Data” label

Saving the data in the correct format is essential for the automatic averaging and graphing template to recognize the correct data fields. This is also the correct data format for uploading your measurement file for my QTR Quad file relinearization service.

Create a relevant file name for the paper or the QTR profile/curves used to print the target. I generally suggest using the same name as the QTR or Piezography Profile being measured.

  • Choose i1 Profiler CGATS Custom (*.txt) from the drop down menu and navigate to the folder you will use for your measurement files, and click Save.
  • This will open a window to choose the custom CGATS file options.
  • You will need to check these four fields:
    1. SampleID
    2. SampleName
    3. XYZ
    4. Lab

See the screenshot that illustrates the data fields to check.

When you click OK to accept, it will save the file with the name you chose followed by “M0” (e.g., PaperName-ProfileNameM0). Open the file in a text editor to make sure it looks like the illustration below.

Graphing the Measurement Data

What good is measurement data if you can’t view it? Along with the workflow, I have added an Excel spreadsheet template that does an automatic lookup of the luminosity measurements and then averages and graphs them. The second sheet uses a different lookup to calculate density from the recorded XYZ_Y measurements (a sketch of that conversion follows the notes below).

  • Note: There are different template files for i1 Profiler and for the older ProfileMaker5 MeasureTool.
  • The template will only work with measurement files saved as detailed in the instructions and screenshots. Incorrectly formatted measurement files will result in incorrect lookups and wonky graphs.
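
For the curious, the density sheet's lookup reduces to one line: reflection density is the negative base-10 logarithm of the Y reflectance. A minimal sketch, assuming Y is recorded on a 0-100 scale as in the saved CGATS file; the helper and its paper-relative option are just for illustration:

```python
import math

def density_from_Y(xyz_y, paper_Y=None):
    """Convert a CIE XYZ Y value (0-100) to reflection density.
    If paper_Y is given, return density relative to paper white."""
    d = -math.log10(xyz_y / 100.0)
    if paper_Y is not None:
        d -= -math.log10(paper_Y / 100.0)
    return d

print(density_from_Y(4.0))          # ~1.40 absolute density
print(density_from_Y(4.0, 90.0))    # ~1.35 relative to paper white
```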

Detail Enhancements Using Unsharp Mask Filters and Layer Mask Techniques

During a recent private Photoshop lesson with a student, we were attempting to address a problem with what might be seen as “flare” around highlight areas that bleeds into the mid-tones, giving the image a mushy or muddy feeling. I introduced him to a type of detail enhancement that is often used for color images, but that I've adapted for increasing the detail of flatbed scans of prints being used in two different book projects I’m currently working on.

I am using the same image from the previous post comparing Capture 1 Pro 8 and Adobe Camera Raw to illustrate these steps. Fine details, like those in this picture, run the risk of quickly degrading if not carefully sharpened and masked, and the techniques shown here can be used to temper the sharpening or fine detail enhancements without blowing out detail, which can happen with other sharpening methods. 

As a bonus I am making this photograph available as a part of a special discounted print offer on images used to illustrate select posts on this site. Read more about this offer at the end of this post or click here now

Original Capture 1 export. The mid-tones have a slightly blurred or flared look around the highlight areas. The first part of this workflow will remove the flare, and the second part will sharpen the mid-tone separation and fine detail.

You can see the problem in this first screenshot. The areas around the highlights look flared, so we can deal with that and, at the same time, increase some of the fine-detail separation throughout the whole image.

PART 1 - Reduce Halos and Increase Subtle Mid-tone Separation

The following series of steps are illustrated in the gallery of screenshots below.

  1. If you are working from digital capture you can just duplicate the background layer two times, and name the top most copy “detail” and the lower copy “reduce halo”.
  2. Turn off the visibility of the “detail” layer and make “reduce halo” the active layer
  3. Select Unsharp Mask from the Filter>Sharpen menu.

    • Use an Amount of about 20-30, a Radius of 30, and a Threshold of 0 and then apply the filter.
    • It will look over-sharpened all over, but we are going to use a blend mode and a layer mask to selectively darken the lighter midtones.
  4. Change the blend mode of the reduce halo layer to Darken. This alone will improve the image, but might make the shadow tones a little too dark.
  5. To localize the "reduce halo" layer to the upper mid-tones, create a new layer mask filled with black by alt/option-clicking the layer mask icon in the Layers panel.
  6. With the layer mask active, navigate to Apply Image in the Image menu.

    • Select background as the source and set the blend mode to Normal, and then click OK to accept. This will create a luminosity mask and make the "reduce halo" effect proportional to the tones in the image.

    • This effect can be further controlled in the Masks Panel by setting the feathering to a value between 2-5 pixels.

    • The range of tones affected can be controlled by applying a curves adjustment to the mask.

      • To do this, make the mask active and option/alt-click the mask icon, then hit cmd/ctrl+m to bring up the curves adjustment, and darken the shadow portion of the curve and lighten the highlights. It is important to note that once the curve is applied, it cannot be readjusted.

      • You can lower the density and readjust the feathering of the mask in the Masks Panel or lower the opacity of the layer to achieve the desired effect.

After making these adjustments to the mask, the original sharpening effect becomes very subtle, but it helps enhance the mid-tone contrast so you are not increasing the appearance of any halos when making other lightening adjustments later in the editing process.
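
For those who like to see the blend math spelled out, here is a rough numpy sketch of Part 1 under stated assumptions: the file name, amount, and radius are placeholders, and a flat array stands in for Photoshop's re-editable layers and masks.

```python
import numpy as np
from PIL import Image, ImageFilter

# Load the image as a 0-1 float grayscale array (placeholder file name).
src = Image.open("scan.tif").convert("L")
img = np.asarray(src, dtype=np.float64) / 255.0

# Unsharp mask with a low amount and a large radius (~25%, radius 30).
blur = np.asarray(src.filter(ImageFilter.GaussianBlur(30)),
                  dtype=np.float64) / 255.0
usm = np.clip(img + 0.25 * (img - blur), 0.0, 1.0)

# Darken blend mode: keep only the pixels the filter made darker.
darkened = np.minimum(img, usm)

# Luminosity mask (Apply Image, Normal): the effect is proportional to
# the tone, so the upper mid-tones receive most of the correction.
mask = img
result = img * (1.0 - mask) + darkened * mask

Image.fromarray(np.uint8(np.round(result * 255.0))).save("reduced-halo.tif")
```

Part 2 below is the same idea with the mask inverted (the Invert checkbox in Apply Image), so the effect lands in the shadows and mid-tones instead of the highlights.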

Part 2 - Enhance Mid-tone Detail

This short workflow is very similar to the "reduce halo" steps, but it targets a different range of tones and uses slightly different settings. The screenshots at the end illustrate the different settings and what your masks and results should look like.

  1. Make the “Detail” layer active, and select Unsharp Mask from the Filter>Sharpen menu.

    • The Amount and Radius should both have a value around 30; Threshold should be 0.
    • The highlights will be very over sharpened, but like in the previous workflow, it will be controlled with a luminosity mask.
  2. Create a new opaque layer mask (filled with black) by alt/option-clicking the Create Layer Mask icon in the Layers panel.
  3. With the layer mask active, navigate to Apply Image in the Image menu.

    • Select Background as the Source, set the Blend Mode to Normal, check the Invert box, and click OK. This will create a luminosity mask resulting in the sharpening effect being inversely proportional to the tone in the image.
    • This can be controlled in the Masks Panel like any other mask by feathering the mask with a setting between 2-5 pixels.
    • It can also be controlled by applying a curves adjustment to the mask.

      • To do this, make the mask active and option/alt-click the mask icon, then hit cmd/ctrl+m to bring up the curves adjustment, and darken the shadow portion of the curve and lighten the highlights. This is like applying an adjustment to an alpha channel, and it is important to note that once the curve is applied, it is a destructive edit to the mask and cannot be reversed or undone later in the process.

      • You can lower the density and readjust the feathering of the mask in the Masks Panel.
  4. Now go back to the layer opacity and change it to a value between 50%-70%, which should be sufficient to boost the mid-tone separation and fine detail without overdoing the effect.

  5. You can now select both of these detail enhancement layers and create a new layer group to keep the Layers Panel more organized and less cluttered. The added benefit of grouping the detail enhancement layers is that they can be further controlled through the group opacity setting.

With these adjustments made, you can proceed with any burning and dodging adjustments, with curves adjustment layers placed above the detail enhancement layers. With the added detail and separation created in these previous steps, additional contrast (if desired) will require less dramatic curves adjustments.

Washington Oaks, Florida, 2013 - Carbon Pigment Print
from $25.00

Specially priced example print made with the techniques detailed in specific blog posts

Available in two editions

— 5.5"x9" signed and un-numbered print on 8.5"x11" paper
— 12"x20" limited to an edition of 15 signed and numbered prints

Printed on Museo Portfolio Rag with a custom six-gray ink blend of Jon Cone Piezography Carbon inks and STS Matte Black for additional density, using my unique QuadToneRIP profiling process.


Capture 1 Pro 8 vs Adobe Camera Raw with the Leica M Monochrome

I started to write about comparing Lightroom and Photoshop last year, but that turned into this post. I have been using Capture 1 Pro 7, and now version 8, as my default raw editor for exporting TIFFs to Photoshop for final editing and output for the past year-plus, and I haven't looked back. Until now... Adobe recently updated the Adobe Camera Raw processing engine, which is now included with the new Lightroom CC, and I was asked about testing how the Leica M Monochrome would benefit from Capture 1 compared to Lightroom CC or Adobe Camera Raw 9. Honestly, the differences between C1 and the new ACR are still astounding.

Adobe Camera Raw on left and Capture 1 Pro 8 on right at 100% pixel view

The above screenshot may make this blatantly obvious, but I will point out some areas where subtle details benefit from the Capture 1 engine. There is something about the rendering of fine details and midtone separation that is so much better in C1, and it takes much less sharpening and clarity enhancement to achieve a perfectly sharp and smooth image. I would even say that it is sometimes TOO much sharpness, and there is very little final sharpening that needs to be done in Photoshop.

Here are some other screenshots with the detail and sharpening settings in both Capture 1 and Adobe Camera Raw. My personal sharpening settings are generally lower than the Capture 1 defaults, with only a small amount of Clarity and Structure in the detail settings. To arrive at a comparable-looking image in ACR, the Clarity is cranked up to 64, and the sharpening and detail are much higher than the default of 25.

The benefits of Capture 1 are not all about sharpness and fine detail. The ability to select processing curve options allows you to create an initial tonal range that is tailored to the type of image and the desired final outcome. Adobe Camera Raw has one default curve, and all global tonal edits are done through the settings for exposure, contrast, highlights, etc.

Washington Oaks, Florida 2013
Final edited version with warm-toned ink simulation 

About this Photograph

This was made during a private Digital Black and White Crash Course workshop near Daytona Beach, Florida with the student's Leica M Monochrome camera. The challenge was to see how much of the fine detail could be retained in the harsh lighting conditions, while still being able to make a coherent picture from the extremely dense and chaotic scene.

Be sure to read the next post, which works with the same image, to learn how to further enhance the detail in your photographs with some additional Photoshop Unsharp Mask techniques. Read to the end for a new special offer on photographs from select blog posts.

ILCC Part 2 — Introducing the Curves Adjustment Layer

This is a continuation of the introduction to what I have come to call my Intuitive Localized Contrast Control Method, which is essentially dodging and burning by working directly with the tones in the image rather than with the Burn and Dodge tools in the Photoshop toolbar. This post focuses on one of the most important editing tools in Photoshop—the Curves Adjustment Layer and its Layer Mask.

The curves adjustment layer is going to be the primary tool for creating your initial tonal adjustments, overall lightness/darkness, global contrast enhancements, dramatic burning and dodging effects, local subtle tonal changes, and final printing adjustments. The curves adjustment layer is the workhorse tool, and can do just about anything when it comes to tonal edits in your black and white images. There are other adjustment layers that can be used for altering brightness, contrast, and input and output points, but nothing is as controllable as a curves adjustment.


CURVES BASICS

In its simplest form, the curves adjustment layer allows you to define a point on the tonal scale and move it to any other point (within limits: there needs to be a minimum separation between defined points). Photoshop then interpolates the tonal values of all the other points along the scale relative to the adjustment that was made. You can click on the line in the Curves properties panel to define a control point, or click the scrubby slider icon to select the tone to edit by simply clicking on it within your image and dragging that value up or down to make it lighter or darker. It is possible to have as many as 21 different control points per curves layer, but you will find that you rarely need more than 2-3 control points to create a wide range of edits. One of the most common beginner mistakes is creating too many points in one adjustment layer, which can lead to harsh tonal transitions, reversals, posterization, or other editing artifacts.
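
Under the hood, a curves adjustment amounts to a smooth, monotone interpolation through your control points, baked into a lookup table that is applied to every pixel. Photoshop's exact spline isn't public, so in this minimal sketch scipy's monotone PCHIP interpolator stands in for it:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# A gentle S-curve: darken the shadows slightly, lighten the highlights.
points_in  = [0, 64, 192, 255]
points_out = [0, 56, 200, 255]

curve = PchipInterpolator(points_in, points_out)   # monotone, no reversals
lut = np.clip(np.round(curve(np.arange(256))), 0, 255).astype(np.uint8)

# Applying the adjustment is then a simple lookup on every pixel value.
img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # stand-in image
adjusted = lut[img]
```

This is also why too many closely spaced points cause trouble: the interpolated curve has to bend sharply to pass through all of them, and those bends show up as harsh transitions or reversals.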

So why not just use Image>Adjustments>Curves (cmd/ctrl+m) rather than a curves adjustment layer?

The standard curves adjustment permanently alters the pixels of the image layer it is applied to. If that is done on the background layer, it is what is known as a destructive edit: an edit that cannot be modified separately from the image it is applied to. With an adjustment layer, the pixel values in the underlying image layers do not change; they are only affected by the adjustment layers above them. You can return to those adjustment layers at any point, and they can be altered again and again until the layers are merged or flattened down to a single background layer.

So what is the best way to work with the curves adjustment layer?

Well, that depends on the situation... If you are making large global edits that affect the general lightness and darkness or overall contrast, create a new curves adjustment layer, choose a point on the section of the curve where you want the majority of the effect to take place, and move that point up or down depending on the adjustment you want to make. Alternatively, and to work more intuitively, you can select the scrubby slider icon, click (and hold) on the tone in the image that you want to affect, and simply drag up to lighten or down to darken. If that isn't the tone you wanted to edit, you can undo, or drag the control point off the curve and select another tone. If you create two or more points that are too close together, you will see that this can introduce dramatic and unwanted effects, so I recommend 1-3 widely spaced control points for global contrast or lightness and darkness edits.


In the next post I will go into detail about how this workflow differs from how people are usually taught to use adjustment layers and masks, and how you can work directly with the tones in the image without worrying about your image taking on an overly affected and unbelievable quality.

About This Photograph

From now on I will begin giving more details, technical and otherwise, about the photographs used in creating the example screenshots. In many cases they will not have been previously seen or published here or on my personal website. In some cases, like this one, the image is still in the editing stage, which is meant to show how I use these techniques on a day-to-day basis.

This particular photograph is part of my Lower Owens River Project, made in a part of the (dry) Owens Lake where water is pumped in as part of the Dust Mitigation Project, and one I recently drum scanned to make a series of larger exhibition prints. The original 8x10 negative was made in 2009, and there were some developing defects that prevented me from making a final gelatin-silver contact print.

The ability to control all the delicate midtones and highlights in the sky and the reflection in the water, while at the same time increasing the dark shadow detail and midtone separation, is something that could not have been done as easily in the darkroom. When combined with a smooth rag paper and the Piezography Carbon inks, the richness of those tonal values is where images like this can really sing.

In Search of the Perfect QTR Profile

I mentioned last year that I was working on a new user guide for creating custom QuadToneRIP profiles based on the methods I've been using for the last few years. This simple PDF has taken on a life of its own: the original idea of a short updated user guide has turned into a full-length technical book, and possibly software to make what could be the perfect QTR profile.

This all started more than a year ago when I was asked to write about digital black and white ink jet printing for someone else's book. That project fell through, but it sent me on a year-long journey testing and comparing prints with nearly every possible method.

I still think that the gold standard for most people is Jon Cone's Piezography method, but there are still people who (like me) are on a budget and want to make their own profiles—either to test different papers or different custom ink mixtures, to profile problematic equipment, or just because...

Difficulties of a Six-Ink Profile

The gradient on the right is from a profile with incorrectly set crossover points and no additional overlap. The gradient on the left has correctly set crossover points and 60% overlap.

One of the things that sets Piezography profiles apart from QTR profiles is the unique way Piezography partitions the inks, and the shape of each of the overlapping ink curves with their long trailing edges. QuadToneRIP uses a much different way of partitioning the grayscale, and, when using the standard ink limit/partitioning method, the shape of each ink's curve is, for the most part, out of your control*.

*There is a way to define a Photoshop ACV curve for each ink, but you lose the ability to control the gray curve with the other settings in the ink descriptor file—that is for another post.


The default QTR curve-building algorithm has some overlap as one shade passes to the next, and it works well for a K3 profile. But when more than 4 inks are coming in and out of use so rapidly, any mistake with the crossover settings can cause terrible banding, and the profile won't linearize when running the QTR profile installer. The trouble is that no matter how carefully you set the crossover points, the shape of the default curves is very "sharp" and can actually be seen as bumps or horizontal banding in smooth gradients.

The Often Overlooked Gray_Overlap Setting

To minimize the appearance of those bands or sharp edges in the ink curves, I like a lot more of each ink to overlap, to smooth the slope and elongate the overlap with the next darker ink(s). This creates a much more gradual ink distribution and is similar to Jon Cone's approach to building Piezography profiles, although the QTR curve creation program makes curves with a much different shape. I use a GRAY_OVERLAP setting of at least 50, and have now pretty much settled on 75. The maximum value here is 100, but there seems to be little benefit to a setting that high.

These screenshots illustrate how increasing the overlap can act to smooth the profile and fill in any noticeable dots as the inks are ramping up and down.

Overcoming the Increase in Density of the Overlap with the Gray_Gamma Setting

Since there is a lot more ink being laid down, the increase in dot gain needs to be compensated for by increasing the GRAY_GAMMA setting in the ink descriptor file. This is essentially a lightness/darkness control similar to the middle slider in the Levels adjustment in Photoshop. A number higher than 1 lightens the print, with a maximum value of 10. I usually just set this to 1.x, with the "x" after the decimal matching the overlap setting.

Example: if the overlap is 60, then I set the gray gamma to 1.6 (for kozo or other papers where the density increases differently than on normal inkjet papers, this gamma setting might be 2.2-2.6 for the same overlap setting of 60). That will give the leading edge of each ink curve a much longer and more gradual slope than the default setting of 1.
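
Stated as code, the rule of thumb looks like this (a hypothetical helper of my own naming; the paper factor is back-solved from the kozo range quoted above):

```python
def suggested_gray_gamma(gray_overlap, paper_factor=1.0):
    """Starting-point GRAY_GAMMA for a given GRAY_OVERLAP.

    paper_factor is an assumed knob: 1.0 for typical inkjet papers;
    roughly 2.0-2.7 reproduces the 2.2-2.6 gamma range quoted above
    for kozo at an overlap of 60.
    """
    return round(1.0 + (gray_overlap / 100.0) * paper_factor, 2)

print(suggested_gray_gamma(60))        # 1.6, as in the example above
print(suggested_gray_gamma(75))        # 1.75, close to the 1.8 used below
print(suggested_gray_gamma(60, 2.3))   # 2.38, within the kozo range
```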

The quad curves in the left-hand window have a GRAY_OVERLAP setting of 75 and a GRAY_GAMMA setting of 1; they are too far "to the left" and will print much too dark. The quad curves in the right-hand window have the same overlap, but a gamma setting of 1.8.

Gray Gamma set to 1 on the left will make a print that is far too dark. The profile on the right has a gamma setting of 1.8. This could have been set to 1.9-2.0 for an even straighter initial gray curve.


Near Linear Output Right Out of the Gate

This is an example of a profile I made quickly this afternoon with just two sheets of paper. The luminosity graph is prior to final linearization, using just the built-in QTR curve creation program with my system of settings for the ink limits, crossovers, and the gray curve. In my book I devote a whole section to my approach to setting ink limits: how I determine the optimal density for each shade, and how best to divide up the tonal scale. Another section covers how to set the exact crossover points (without using complex math; I've done that for you).

Is all this overkill? Maybe, but the goal is making beautiful prints, not fighting for days (or weeks) to get a working profile. This is a case where setting things up right to begin with will go a long way toward getting a great print as soon as possible.

Is this approach with QuadToneRIP as effective as using the Piezography system? That is open for debate, and it depends on your goals, equipment, materials, and how comfortable you are getting messy with inks and density measurements. I usually recommend that people just invest in Piezography, because it is a well-established and well-supported system with a track record and an aesthetic you can trust. But if you do want to get down into the weeds, I am writing the guide, and it should be ready for your summer vacation.