Add a boolean "direct" parameter to gimp_projection_flush_now(),
which specifies if the projection buffer should only be invalidated
(FALSE), or rendered directly (TRUE).
Pass TRUE when flushing the projection during painting, so that the
affected regions are rendered in a single step, instead of tile-by-
tile. We previously only invalidated the projection buffer, but
since we synchronously flush the display right after that, the
invalidated regions would still get rendered, albeit less
efficiently.
Likewise, pass TRUE when benchmarking the projection through the
debug action, and avoid flushing the display, to more accurately
measure the render time.
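For illustration, the call sites described above might look roughly
like this (a sketch only, not the actual code; the exact prototype
lives in app/core, and gimp_display_flush_now() is assumed to be the
synchronous display flush):

  /* Sketch of the updated prototype. */
  void gimp_projection_flush_now (GimpProjection *proj,
                                  gboolean        direct);

  /* While painting: render the affected regions in a single step,
   * since the display is flushed synchronously right afterwards.
   */
  gimp_projection_flush_now (gimp_image_get_projection (image), TRUE);
  gimp_display_flush_now (display);

  /* Projection benchmark (debug action): render directly and skip
   * the display flush, so that only the render time is measured.
   */
  gimp_projection_flush_now (gimp_image_get_projection (image), TRUE);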
In gimp_drawable_edit_fill(), when filling/clearing the whole
drawable, without any special compositing (i.e., when there's no
selection, the opacity is 100%, and the layer mode is trivial),
fill/clear the drawable's buffer directly, without using an
applicator. This makes such operations much faster, especially in
big images.
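A hedged sketch of such a fast path; the condition variables here
are illustrative, not the actual ones in gimp_drawable_edit_fill():

  /* When nothing needs compositing, write the fill color straight
   * into the drawable's buffer instead of going through a
   * GimpApplicator.
   */
  if (! has_selection                &&
      opacity == GIMP_OPACITY_OPAQUE &&
      layer_mode_is_trivial)
    {
      GeglBuffer    *buffer = gimp_drawable_get_buffer (drawable);
      GeglRectangle  rect   = { x, y, width, height };

      gegl_buffer_set_color (buffer, &rect, fill_color);

      gimp_drawable_update (drawable, x, y, width, height);
    }
  else
    {
      /* fall back to the regular applicator-based code path */
    }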
... which is similar to gimp_fill_options_create_buffer(); however,
it fills an existing buffer, instead of creating a new one.
Implement gimp_fill_options_create_buffer() in terms of the new
function.
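The subject line of that commit is elided above, so the new
function's name used below is an assumption; the relationship
between the two functions would look roughly like:

  /* Assumed prototype for the new buffer-filling function. */
  void gimp_fill_options_fill_buffer (GimpFillOptions *options,
                                      GimpDrawable    *drawable,
                                      GeglBuffer      *buffer,
                                      gint             pattern_offset_x,
                                      gint             pattern_offset_y);

  /* gimp_fill_options_create_buffer() then just allocates the
   * buffer and delegates the actual filling.
   */
  GeglBuffer *
  gimp_fill_options_create_buffer (GimpFillOptions     *options,
                                   GimpDrawable        *drawable,
                                   const GeglRectangle *rect,
                                   gint                 pattern_offset_x,
                                   gint                 pattern_offset_y)
  {
    GeglBuffer *buffer;

    buffer = gegl_buffer_new (rect,
                              gimp_drawable_get_format_with_alpha (drawable));

    gimp_fill_options_fill_buffer (options, drawable, buffer,
                                   pattern_offset_x, pattern_offset_y);

    return buffer;
  }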
When creating a drawable undo from the drawable's buffer, align the
copied rectangle to the buffer's tile grid, so that all the copied
tiles are COWed, saving memory and gaining speed.
Add applied_x and applied_y fields to GimpDrawableUndo, specifying
the position at which to apply the applied_buffer, so that we apply
it in the right place, even if the undo rect has changed due to
alignment.
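The alignment itself is simple arithmetic; a sketch with a
hypothetical helper, assuming non-negative coordinates and relying
on GeglBuffer's "tile-width"/"tile-height" properties:

  /* Hypothetical helper: snap "rect" outward to the tile grid of
   * "buffer", so that copying it shares whole tiles (COW) instead
   * of duplicating pixel data.
   */
  static void
  align_rect_to_tile_grid (GeglBuffer    *buffer,
                           GeglRectangle *rect)
  {
    gint tile_width;
    gint tile_height;
    gint x1, y1, x2, y2;

    g_object_get (buffer,
                  "tile-width",  &tile_width,
                  "tile-height", &tile_height,
                  NULL);

    /* assuming non-negative coordinates, as for drawable buffers */
    x1 = (rect->x / tile_width)  * tile_width;
    y1 = (rect->y / tile_height) * tile_height;
    x2 = ((rect->x + rect->width  + tile_width  - 1) / tile_width)  *
         tile_width;
    y2 = ((rect->y + rect->height + tile_height - 1) / tile_height) *
         tile_height;

    rect->x      = x1;
    rect->y      = y1;
    rect->width  = x2 - x1;
    rect->height = y2 - y1;
  }

The original (unaligned) position is what applied_x/applied_y
preserve, so the applied_buffer is still applied where the edit
actually happened.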
gimp-scratch is a fast memory allocator (with speed on the same
order of magnitude as alloca()), suitable for small (up to a few
megabytes), short-lived (usually bound to the current stack frame)
allocations. Unlike alloca(), gimp-scratch doesn't use the stack,
and is therefore safer; it will also serve bigger requests, by
falling back to malloc().
The allocator itself is very simple: We keep a per-thread stack of
cached memory blocks (allocated using the normal allocator). When
serving an allocation request, we simply pop the top block off the
stack, and return it. If the block is too small, we replace it with
a big-enough block. When the block is freed, we push it back to
the top of the stack (note that even though each thread uses a
separate stack, blocks can be migrated between threads, i.e.,
allocated on one thread, and freed on another thread, although this
is not really an intended usage pattern.) The idea is that the
stacks will ultimately stabilize to contain blocks that can serve
all the encountered allocation patterns, without needing to resize
any of the blocks; as a consequence, the amount of scratch memory
allocated at any given time should really be kept to a minimum.
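A stripped-down, GLib-based sketch of the idea (not the actual
gimp-scratch code, which keeps a block header with more
bookkeeping):

  #include <glib.h>

  typedef struct
  {
    gsize  size;
    guint8 data[];
  } ScratchBlock;

  /* one stack of cached blocks per thread */
  static GPrivate scratch_stack = G_PRIVATE_INIT (NULL);

  static GPtrArray *
  scratch_get_stack (void)
  {
    GPtrArray *stack = g_private_get (&scratch_stack);

    if (! stack)
      {
        stack = g_ptr_array_new ();
        g_private_set (&scratch_stack, stack);
      }

    return stack;
  }

  static gpointer
  scratch_alloc (gsize size)
  {
    GPtrArray    *stack = scratch_get_stack ();
    ScratchBlock *block = NULL;

    /* pop the top cached block, if any */
    if (stack->len > 0)
      block = g_ptr_array_remove_index (stack, stack->len - 1);

    /* no block, or too small?  replace it with a big-enough one */
    if (! block || block->size < size)
      {
        g_free (block);

        block       = g_malloc (sizeof (ScratchBlock) + size);
        block->size = size;
      }

    return block->data;
  }

  static void
  scratch_free (gpointer ptr)
  {
    GPtrArray    *stack = scratch_get_stack ();
    ScratchBlock *block = (ScratchBlock *)
      ((guint8 *) ptr - G_STRUCT_OFFSET (ScratchBlock, data));

    /* push the block back on top of this thread's stack */
    g_ptr_array_add (stack, block);
  }

Typical usage is a matched alloc/free pair within the same scope,
e.g. a temporary row buffer allocated at the top of a function and
released before returning.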
... which is similar to gimp_async_add_callback(), taking an
additional GObject argument. The object is kept alive for the
duration of the callback, and the callback is automatically removed
when the object is destroyed (if it hasn't been already called).
This is analogous to g_signal_connect_object(), compared to
g_signal_connect().
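The new function isn't named in the message above; assuming a name
and shape along the following lines (my_tool_async_done and tool are
placeholders), usage mirrors g_signal_connect_object():

  /* Assumed prototype for the object-bound variant. */
  void gimp_async_add_callback_for_object (GimpAsync         *async,
                                           GimpAsyncCallback  callback,
                                           gpointer           data,
                                           gpointer           gobject);

  /* "tool" is kept alive while the callback is pending, and the
   * callback is removed automatically if "tool" is destroyed
   * before the async completes.
   */
  gimp_async_add_callback_for_object (async,
                                      (GimpAsyncCallback) my_tool_async_done,
                                      tool,
                                      tool);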
In gimp_async_remove_callback(), if removing the last callback
while the callback idle-source is already pending, cancel the idle
source and unref the async object (the async is reffed when adding
the idle source.)
Use gimp_tile_handler_validate_validate(), added in the last
commit, in GimpProjection, in order to render the projection,
instead of separately invalidating the buffer, undoing the
invalidation, and then rendering the graph. This is more
efficient, and more idiomatic.
Add begin_validate() and end_validate() virtual functions, and
corresponding free functions, to GimpTileHandlerValidate. These
functions are called before/after validation happens, and should
perform any necessary steps to prepare for validation. The default
implementation suspends validation on tile access, so that the
assigned buffer may be accessed without causing validation.
Implement the new functions in GimpTileHandlerProjectable, by
calling gimp_projectable_begin_render() and
gimp_projectable_end_render(), respectively, instead of calling
these functions in the ::validate() implementation (which, in turn,
allows us to use the default ::validate() implementation.)
In GimpProjection, use the new functions in place of
gimp_projectable_{begin,end}_render().
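A sketch of the class additions and of GimpTileHandlerProjectable's
implementation (illustrative only; the exact signatures may differ):

  /* New virtual functions on GimpTileHandlerValidateClass. */
  struct _GimpTileHandlerValidateClass
  {
    GeglTileHandlerClass  parent_class;

    /* called before/after validation; the default implementation
     * suspends validate-on-access, so the assigned buffer can be
     * read without triggering further validation
     */
    void (* begin_validate) (GimpTileHandlerValidate *validate);
    void (* end_validate)   (GimpTileHandlerValidate *validate);

    void (* validate)       (GimpTileHandlerValidate *validate,
                             const GeglRectangle     *rect,
                             const Babl              *format,
                             gpointer                 dest_buf,
                             gint                     dest_stride);
  };

  /* GimpTileHandlerProjectable then only brackets the default
   * behavior with the projectable's render hooks.
   */
  static void
  gimp_tile_handler_projectable_begin_validate (GimpTileHandlerValidate *validate)
  {
    GIMP_TILE_HANDLER_VALIDATE_CLASS (parent_class)->begin_validate (validate);

    gimp_projectable_begin_render (
      GIMP_TILE_HANDLER_PROJECTABLE (validate)->projectable);
  }

  static void
  gimp_tile_handler_projectable_end_validate (GimpTileHandlerValidate *validate)
  {
    gimp_projectable_end_render (
      GIMP_TILE_HANDLER_PROJECTABLE (validate)->projectable);

    GIMP_TILE_HANDLER_VALIDATE_CLASS (parent_class)->end_validate (validate);
  }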
In gimp_projection_finish_draw(), make sure we don't accidentally
re-start the chunk renderer idle source while running the remaining
iterations, in case the chunk height changes, and we need to reinit
the renderer state.
Don't needlessly flush projections whose buffer hasn't been
allocated yet. This can happen when opening an image, in which
case the image is flushed before its projection has a buffer.
The smart colorization was leaving irritating single pixels in between
colorized regions, after growing and combining. So let's just flood
these. We don't flood bigger regions (and in particular don't use
gimp_gegl_apply_flood()) on purpose, because there may be small yet
actual regions inside regions which we'd want in other colors.
1-pixel regions are the extreme case, where the chances that one
wanted them filled are just higher.
The distance map already has all the information we need. Also, we
will actually grow up to the max-radius pixel (the middle pixel of a
stroke). After discussing with Aryeom, we realized it was better to
fill a stroke fully (for cases of overflow, I already added the
"Maximum growing size" property anyway).
When an error occurs, we want to prevent overwriting any previous
version of the file with incomplete contents. To do so, run
g_output_stream_close() with a cancelled GCancellable.
See also discussion in #2565.
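In GIO terms, the pattern is simply the following (a minimal sketch,
not the actual file-saving code):

  /* On error, close the output stream with an already-cancelled
   * GCancellable: the close is aborted, so a g_file_replace()-style
   * stream does not commit the incomplete data over the previous
   * version of the file.
   */
  GCancellable *cancellable = g_cancellable_new ();

  g_cancellable_cancel (cancellable);
  g_output_stream_close (G_OUTPUT_STREAM (output), cancellable, NULL);

  g_object_unref (cancellable);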
When flooding the line art, we may overflood it in sample-merge
mode (which uses color in the line art computation). And if all
colors are on the same layer, this would go over other colors
(giving the wrong impression that the line art leaked).
This new option is mostly to keep some control over the mask growth.
Usually a few pixels is enough for most styles of drawing (though we
could technically allow for very wide strokes).
For this, I needed a distance map of the closed version of the line
art (after splines and segments are created). This will result in
invisible stroke borders being added when flooding at the end. These
invisible borders will have a thickness of 0.0, which means that
flooding stops as soon as these single pixels are filled, which
makes it quick, and is perfect since created splines and segments
are 1 pixel thick anyway.
The only downside is having to run "gegl:distance-transform" a
second time, but this still stays fast.
We don't really need to flood every line art pixel, and this new
implementation is simpler (because we don't actually need
over-featured watershedding) and a lot faster, making the line art
bucket fill very responsive.
For this, I am keeping the computed distance map, as well as the
local thickness map, around to be used when flooding the line art
pixels (basically I try to flood half the stroke thickness).
Note that there are still some issues with this new implementation,
as it doesn't yet properly flood created (i.e. invisible) splines
and segments, in particular the ones between 2 colored sections. I
am going to fix this next.
This commit is based on GEGL master as I just made the auxiliary buffer
of gegl:watershed-transform optional for basic cases.
It doesn't necessarily make the whole operation that much faster
according to my tests, but it makes the code simpler, as creating
this priority map was quite unnecessary.
... and use in bucket-fill tool
Add gimp_pickable_contiguous_region_prepare_line_art_async(), which
computes a line-art asynchronously, and use it in the bucket-fill
tool, instead of having the tool create the async op.
This allows the async to keep running even after the pickable dies,
since we only need the pickable's buffer, and not the pickable
itself. Previously, we reffed the pickable for the duration of the
async, but we could still segfault when unreffing it, if the
pickable was a drawable, and its parent image had already died.
Furthermore, let the async work on a copy of the pickable's buffer,
rather than the pickable's buffer directly. This avoids some race
conditions when the pickable is the image (i.e., when "sample
merged" is active), since then we're using image projection's
buffer, which is generally unsafe to use in different threads
concurrently.
Also, s/! has_alpha/has_alpha/ when looking for transparent pixels,
and quit early, at least during this stage, if the async is
canceled.
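A hedged sketch of the buffer-copy part (gegl_buffer_dup() and
gimp_parallel_run_async() are existing API; the worker function and
the way the copy is released are simplified here):

  /* Work on a private copy of the pickable's buffer, so the async
   * can outlive the pickable and never touches the projection's
   * buffer from another thread.
   */
  GeglBuffer *copy = gegl_buffer_dup (gimp_pickable_get_buffer (pickable));
  GimpAsync  *async;

  async = gimp_parallel_run_async (
    (GimpParallelRunAsyncFunc) prepare_line_art_async_func, /* hypothetical */
    copy);

The worker only reads from the copy and hands the computed line art
back through the async's result; the copy itself can be released
once the worker is done with it.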
When an async that was created through
gimp_parallel_run_async[_full](), and whose execution is still
pending, is being waited upon, maximize its priority so that it
gets executed before all other pending asyncs.
Note that we deliberately don't simply execute the async in the
calling thread in this case, to allow timed-waits to fail (which is
especially important for gimp_wait()).
The "update" signal on drawable or projection can actually be emitted
many times for a single painting event. Just add new signals ("painted"
on GimpDrawable and "rendered" on GimpProjection) which are emitted once
for a single update (from user point of view), at the end, after actual
rendering is done (i.e. after the various "update" signals).
Also better support the sample merge vs current drawable paths for
bucket fill.
Since commit b00037b850, erosion size is not used anymore, as this step
has been removed, and the end point detection now uses local thickness
of strokes instead.
The previous algorithm relied on strokes of small radius to detect
points of interest. In order to work with various stroke sizes, we
computed an approximate median stroke thickness, then used this
median value to erode the binary line art.
Unfortunately this did not work that well for very fat strokes, and
it could also open holes in the line art. These holes were usually
filled back later during the spline and segment creation. Yet this
could not be fully guaranteed, and we saw cases where color filling
would leak out of line art zones that had no holes to start with
(which is the opposite of what this new feature is supposed to
achieve)!
This updated code instead computes a radius estimate for every
border point of the strokes, and the end point detection uses this
local thickness information. A local approximation is obviously much
more accurate than a single thickness approximation for the whole
drawing, while not making the processing slower (in particular since
we got rid of the quite expensive erosion step).
This fixes the aforementioned issues (i.e. it works better with fat
strokes and does not create invisible holes in closed lines), and it
is also not subject to the problem of mistakenly increasing the
median radius when you color fill in sample merge mode (i.e. also
using the color data in the input)!
Also it is algorithmically less intensive, which is obviously very
good.
This new version of the algorithm is a reimplementation in GIMP of new
code by Sébastien Fourey and David Tschumperlé, as a result of our many
discussions and tests with the previous algorithm.
Note that we ran various tests, experiments and proposals to try
and improve on these issues. Skeletonization was considered, but
would most likely have been much slower. A simpler erosion based
solely on the local radius was also a possibility, but it may have
created too much noise (skeleton barbs with high curvature), hence
too many new artificial endpoints.
This new version also creates more endpoints though (and does not seem
to lose any previously detected endpoints), which may be a bit annoying
yet acceptable with the new bucket fill stroking interaction. In any
case, on simple examples, it seems to do the job quite well.
The parallel_distribute() family of functions has been migrated to
GEGL. Remove the gimp_parallel_distribute() functions from
gimp-parallel, and replace all uses of these functions with the
corresponding gegl_parallel_distribute() functions.
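For reference, a hedged sketch of what a converted call site might
look like, assuming GEGL's gegl_parallel_distribute_range() keeps a
(size, thread cost, callback, data) shape; treat the exact signature
as an assumption and check gegl-parallel.h:

  #include <gegl.h>

  /* Callback: process elements [offset, offset + size). */
  static void
  double_range (gsize    offset,
                gsize    size,
                gpointer user_data)
  {
    gfloat *data = user_data;
    gsize   i;

    for (i = offset; i < offset + size; i++)
      data[i] *= 2.0f;
  }

  static void
  double_all (gfloat *data,
              gsize   count)
  {
    /* assumed signature: size, per-element thread cost, callback,
     * user data
     */
    gegl_parallel_distribute_range (count, 1.0, double_range, data);
  }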
I have not added all the options for this new tool yet, but this
sets the base. I also added a few TODOs for several places where we
need to make it settable, in particular the fuzzy select tool, but
also simply PDB calls (this will need to be a PDB context setting).
Maybe I will also want to make some LineArtOptions struct, in order
not to have an endless list of parameters to functions. And at some
point, it may also be worth splitting the process a bit from the
other types of selection/fill (since they barely share any settings
anyway).
Finally, I take the opportunity to document the parameters to
gimp_lineart_close() a little more, which can still be improved
later (I should have documented these straight away when I
re-implemented this all from the G'Mic code, as I am a bit fuzzy on
some details now and will need to re-understand the code).
Rather than just having a click interaction, let's allow "painting"
with the bucket fill. This is very useful for the new "line art"
colorization, since it tends to over-segment the drawing. Therefore,
being able to stroke across the canvas (rather than click, release,
move, click, etc.) makes the process much simpler. This is also
faster since we don't have to recompute the line art while a fill is
in progress.
Note that this new behavior is not only for the line art mode, but
also for any other fill criterion, for which it can also be useful.
One last change of behavior as a side effect: it is possible to
cancel the tool changes the usual GIMP way (for instance by
right-clicking when releasing the mouse button).
Right now, this is mostly meaningless, as it is still done
sequentially. But I am mostly preparing the ground to pre-compute
the line art in a background thread.
The older labelling, based on CImg code, was broken (probably
because of me, from my port). Anyway, I realized that what it was
trying to do was too generic, which is why we had to fix the result
afterwards (labeling all non-stroke pixels as 0, etc.). Instead I
just implemented a simpler labelling that only looks for stroke
regions. It still over-labels the painting a bit, but a lot less,
and is much faster.
I don't actually need to loop through the borders first. This is
what the abyss policy is for, and I can simply check the iterator
position to verify whether I am within the buffer boundaries.
This simplifies the code a lot.
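A sketch of what this means in GEGL terms (illustrative; the format
and the 3x3 neighborhood are placeholders):

  /* Reading a 3x3 neighborhood: GEGL_ABYSS_NONE makes out-of-bounds
   * samples read back as zero, so no explicit border pass is
   * needed.
   */
  gfloat        neighborhood[9];
  GeglRectangle n_rect = { x - 1, y - 1, 3, 3 };

  gegl_buffer_get (buffer, &n_rect, 1.0, babl_format ("Y float"),
                   neighborhood, GEGL_AUTO_ROWSTRIDE, GEGL_ABYSS_NONE);

  /* And inside a buffer-iterator loop, the current item's roi can
   * be compared against the buffer extent to know whether we are on
   * the border, instead of iterating the border rows/columns
   * separately.
   */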
We actually don't need to compute a distance map. I just make the
simplest priority map, with 1 for any line art pixel and 0 for any
other pixel (in the mask or not), lowest priority being propagated
first.
And let the flooding begin!
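A sketch of building that trivial priority map with the newer GEGL
buffer-iterator API ("priority" and "lineart" are placeholder
buffers, and a "Y u8" line art mask where non-zero means stroke is
assumed):

  GeglBufferIterator *gi;

  gi = gegl_buffer_iterator_new (priority, NULL, 0,
                                 babl_format ("Y u8"),
                                 GEGL_ACCESS_WRITE, GEGL_ABYSS_NONE, 2);
  gegl_buffer_iterator_add (gi, lineart, NULL, 0,
                            babl_format ("Y u8"),
                            GEGL_ACCESS_READ, GEGL_ABYSS_NONE);

  while (gegl_buffer_iterator_next (gi))
    {
      guint8 *prio = gi->items[0].data;
      guint8 *line = gi->items[1].data;
      gint    n    = gi->length;

      /* priority 1 on line art pixels, 0 everywhere else */
      while (n--)
        *prio++ = (*line++ != 0) ? 1 : 0;
    }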