Supports only setting custom sample locations in subpasses via
vkCmdBeginRenderPass. Does not support setting custom sample locations via
vkCmdBindPipeline or vkCmdSetSampleLocationsEXT, although that info is
collected for possible future enhancements (see the usage sketch after the list below).
- MVKPhysicalDevice track platform support and respond to property queries.
- MVKCmdBeginRenderPassBase collect subpass custom sample locations.
- MVKPipeline support dynamic state values beyond 31.
- MVKPipeline collect custom sample locations.
- Add MVKCmdSetSampleLocations to support vkCmdSetSampleLocationsEXT,
collecting dynamic custom sample locations.
- MVKCommandEncoder support collecting custom sample positions from the subpass
and from dynamic state, and set them on the MTLRenderPassDescriptor for each Metal render pass.
- MVKArrayRef add assignment operator.
- Add MVKPhysicalDeviceMetalFeatures::programmableSamplePositions.
- Update VK_MVK_MOLTENVK_SPEC_VERSION to version 34.
- MVKCommandBuffer.h remove obsolete comment documentation.
- Update Whats_New.md.
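
A minimal usage sketch of the supported path, assuming a 4-sample attachment
and illustrative sample positions: the app supplies per-subpass custom sample
locations to vkCmdBeginRenderPass() through the pNext chain of
VkRenderPassBeginInfo, per VK_EXT_sample_locations.

    #include <vulkan/vulkan.h>

    void beginRenderPassWithCustomSampleLocations(VkCommandBuffer cmdBuf,
                                                  VkRenderPassBeginInfo rpBegin) {
        // Four custom sample positions for a 4x MSAA attachment (values illustrative).
        VkSampleLocationEXT locs[4] = { {0.25f, 0.25f}, {0.75f, 0.25f},
                                        {0.25f, 0.75f}, {0.75f, 0.75f} };

        VkSampleLocationsInfoEXT slInfo = {};
        slInfo.sType                   = VK_STRUCTURE_TYPE_SAMPLE_LOCATIONS_INFO_EXT;
        slInfo.sampleLocationsPerPixel = VK_SAMPLE_COUNT_4_BIT;
        slInfo.sampleLocationGridSize  = {1, 1};
        slInfo.sampleLocationsCount    = 4;
        slInfo.pSampleLocations        = locs;

        VkSubpassSampleLocationsEXT subpassLocs = {};
        subpassLocs.subpassIndex        = 0;
        subpassLocs.sampleLocationsInfo = slInfo;

        VkRenderPassSampleLocationsBeginInfoEXT rpSampleLocs = {};
        rpSampleLocs.sType = VK_STRUCTURE_TYPE_RENDER_PASS_SAMPLE_LOCATIONS_BEGIN_INFO_EXT;
        rpSampleLocs.postSubpassSampleLocationsCount = 1;
        rpSampleLocs.pPostSubpassSampleLocations     = &subpassLocs;

        rpBegin.pNext = &rpSampleLocs;   // chain onto the begin info (overwrites any existing pNext in this sketch)
        vkCmdBeginRenderPass(cmdBuf, &rpBegin, VK_SUBPASS_CONTENTS_INLINE);
    }
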
A small leak occurs if no existing IOSurface is provided to vkUseIOSurfaceMVK(),
because CoreFoundation objects returned from functions with Create in their name
must be released with CFRelease().
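
A minimal sketch of the Core Foundation "Create Rule" behind the fix, with
illustrative class and member names rather than MoltenVK's internals: when no
IOSurface is supplied, one is created internally, and that creating reference
must be balanced with CFRelease().

    #include <CoreFoundation/CoreFoundation.h>
    #include <IOSurface/IOSurface.h>

    class SurfaceHolder {
    public:
        void useIOSurface(IOSurfaceRef ioSurface, CFDictionaryRef createProps) {
            releaseIOSurface();                                    // drop any previous surface
            if (ioSurface) {
                _ioSurface = (IOSurfaceRef)CFRetain(ioSurface);    // caller's surface: retain it
            } else {
                _ioSurface = IOSurfaceCreate(createProps);         // "Create" => we own a +1 reference
            }
        }
        ~SurfaceHolder() { releaseIOSurface(); }
    private:
        void releaseIOSurface() {
            if (_ioSurface) { CFRelease(_ioSurface); _ioSurface = nullptr; }   // balances Create/Retain
        }
        IOSurfaceRef _ioSurface = nullptr;
    };
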
In the interest of a Single Source of Truth, OS version support is now populated
in MVKExtensions.def, and that info is used to validate each Vulkan extension
against the OS versions that support the functionality the extension requires,
with the default being unsupported unless otherwise indicated in MVKExtensions.def
(see the sketch after the list below).
- Add OS version info for each extension in MVKExtensions.def.
- mvkIsSupportedOnPlatform() checks every extension for OS version support,
rather than consulting a separately-populated list of OS version limitations
(which defaulted to supported, instead of unsupported).
- Visually clean up MVKExtensions.def for easier reading.
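
A minimal sketch of the X-macro pattern behind MVKExtensions.def; the macro
parameter list, version numbers, and generated function names here are
illustrative assumptions, not the exact MoltenVK signatures. Each entry carries
the minimum macOS and iOS versions that provide its functionality, and a
support check built from the list defaults to unsupported below those versions.

    #include <TargetConditionals.h>

    // Stand-in for the kind of entries that live in MVKExtensions.def:
    #define MVK_EXTENSION_LIST(MVK_EXTENSION)                                  \
        MVK_EXTENSION(KHR_swapchain,        KHR_SWAPCHAIN,        10.11,  8.0) \
        MVK_EXTENSION(EXT_sample_locations, EXT_SAMPLE_LOCATIONS, 10.13, 11.0)

    // One expansion of the list: a per-extension platform-support check in the
    // spirit of mvkIsSupportedOnPlatform().
    #define MVK_EXTENSION(var, EXT, macOSMin, iOSMin)                 \
        static bool isSupported_##var(double osVersion) {            \
            return osVersion >= (TARGET_OS_OSX ? macOSMin : iOSMin); \
        }
    MVK_EXTENSION_LIST(MVK_EXTENSION)
    #undef MVK_EXTENSION
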
std::iterator is deprecated in C++17, which triggers multiple compilation warnings.
Update MVKSmallVector::iterator to explicitly specify iterator traits,
instead of subclassing from std::iterator.
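
A minimal sketch of the replacement, with an illustrative class name rather
than the exact MVKSmallVector::iterator shape: the iterator traits become
explicit type aliases instead of being inherited from the deprecated
std::iterator base.

    #include <cstddef>
    #include <iterator>

    template <typename T>
    class SmallVectorIterator {
    public:
        // Traits spelled out directly; no std::iterator base class needed.
        using iterator_category = std::random_access_iterator_tag;
        using value_type        = T;
        using difference_type   = std::ptrdiff_t;
        using pointer           = T*;
        using reference         = T&;

        explicit SmallVectorIterator(T* p) : _ptr(p) {}
        reference operator*() const { return *_ptr; }
        SmallVectorIterator& operator++() { ++_ptr; return *this; }
        bool operator!=(const SmallVectorIterator& o) const { return _ptr != o._ptr; }

    private:
        T* _ptr;
    };
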
Qualify use of std::remove() in mvkRemoveAllOccurances(),
to eliminate resolution ambiguity.
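
A minimal sketch of the qualification, with a simplified helper standing in for
mvkRemoveAllOccurances() and a std::vector standing in for the actual
container: the erase-remove idiom spells out std::remove() so the call cannot
be confused with ::remove() from <cstdio>.

    #include <algorithm>
    #include <cstdio>    // declares ::remove(const char*), the other candidate
    #include <vector>

    template <typename T>
    void removeAllOccurrences(std::vector<T>& container, const T& val) {
        // Qualified call: always the algorithm, never the stdio file-deletion function.
        container.erase(std::remove(container.begin(), container.end(), val), container.end());
    }
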
To better support pipeline layout compatibility between pipelines with differing
quantities of descriptor sets, move the buffer indexes used by implicit buffers to
the top end of the Metal buffer index range, below vertex and tessellation buffers.
MVKPipeline calculates implicit buffer indexes based on the vertex and tessellation
buffers required by the pipeline, instead of on the descriptors in MVKPipelineLayout.
MVKPipeline tracks the buffer index counts consumed by MVKPipelineLayout, to validate
that room remains for the implicit buffers.
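
A minimal sketch of that index assignment, with illustrative struct, field, and
implicit-buffer names: the implicit indexes are handed out downward from just
below the vertex/tessellation buffers at the top of the Metal buffer argument
table, independently of how many descriptors the layout declares, so they never
shift between pipelines with differing descriptor set counts.

    #include <cstdint>

    struct ImplicitBufferIndexes {
        uint32_t swizzleBufferIndex;
        uint32_t bufferSizeBufferIndex;
        uint32_t viewRangeBufferIndex;
    };

    ImplicitBufferIndexes assignImplicitBufferIndexes(uint32_t mtlBufferCount,          // e.g. 31 entries in the buffer argument table
                                                      uint32_t vtxAndTessBufferCount) { // buffers the pipeline itself needs at the top
        uint32_t nextIdx = mtlBufferCount - vtxAndTessBufferCount;  // first index below the vertex/tessellation buffers
        ImplicitBufferIndexes idxs;
        idxs.swizzleBufferIndex    = --nextIdx;
        idxs.bufferSizeBufferIndex = --nextIdx;
        idxs.viewRangeBufferIndex  = --nextIdx;
        return idxs;
    }
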
For pipeline layout compatibility, consume the Metal resource indexes in this order:
- Consume a fixed number of Metal buffer indexes for Metal argument buffers,
but only consume them if Metal argument buffers are not being used.
- Consume push constants Metal buffer index before descriptor set resources,
but only consume it if the stage uses push constants.
- Consume descriptor set bindings.
In MVKPipelineLayout, separate the tracking of resource counts from the push
constant indexes, and move the push constant indexes ahead of the descriptor
bindings. In MVKPipeline, track which stages use push constants.
Remove unused and obsolete function declaration in MVKDescriptorSet.h.
For a resource object that can be retained by descriptors beyond its lifetime,
release memory resources when the object is destroyed by the app. This includes
objects of type MVKBuffer, MVKBufferView, MVKImageView, and MVKSampler.
When the app destroys an MVKBuffer, also detach it from the MVKDeviceMemory,
to fix a potential race condition when the app updates the descriptor on
one thread while freeing the MVKDeviceMemory on another thread.
MVKImageView guard against detached planes while referenced by a descriptor.
Add comment to clarify how destroy() is called from release().
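
A minimal sketch of the lifecycle pattern, with illustrative class and method
names rather than MoltenVK's exact API: descriptors may retain the object past
the app's vkDestroy*() call, so the destroy path detaches and releases the
underlying memory immediately, while the wrapper object is deleted only when
the last reference calls release().

    #include <atomic>

    class DescriptorRetainedResource {
    public:
        void retain()  { _refCount.fetch_add(1, std::memory_order_relaxed); }

        // destroy() is reached through release(): the object deletes itself only
        // after the final reference (from the app or a descriptor) lets go.
        void release() { if (_refCount.fetch_sub(1, std::memory_order_acq_rel) == 1) { destroy(); } }

        // Called when the app destroys the Vulkan handle.
        void destroyedByApp() {
            detachFromDeviceMemory();   // avoid racing an app thread that frees the VkDeviceMemory
            releaseBackingResources();  // free heavy resources now, even if descriptors still point here
            release();                  // drop the app's reference
        }

    protected:
        virtual void detachFromDeviceMemory() {}
        virtual void releaseBackingResources() {}
        virtual void destroy() { delete this; }
        virtual ~DescriptorRetainedResource() = default;

    private:
        std::atomic<uint32_t> _refCount { 1 };   // the app's initial reference
    };
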
For GPUs that round float clear colors down, a half-ULP adjustment is performed
on normalized formats. But this adjustment should not be performed on SRGB formats,
which Vulkan requires to be treated as linear, with the value managed by the app.
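
A minimal sketch of the adjustment, assuming an n-bit UNORM channel and
illustrative helper and parameter names: the half-ULP compensation is applied
only to plain normalized components, and sRGB components pass through exactly
as the app supplied them.

    #include <cstdint>

    float adjustedClearComponent(float value, uint32_t channelBits,
                                 bool isSRGBFormat, bool gpuRoundsClearColorsDown) {
        if (gpuRoundsClearColorsDown && !isSRGBFormat && channelBits > 0) {
            value += 0.5f / float((1u << channelBits) - 1u);   // half a ULP of an n-bit UNORM channel
        }
        return value;   // sRGB formats (or non-rounding GPUs): unmodified
    }
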
Ensure non-Apple GPUs enable memory barriers.
A previous commit inadvertently disabled GPU memory barriers.
Change the tests for memory barriers to a runtime test for an Apple GPU, instead of
a build-time test for Apple Silicon, to accommodate running under Rosetta 2, and
refactor the tests for Apple Silicon and OS version on some macOS GPU feature settings.
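
A minimal sketch of one possible runtime check, assuming the Metal GPU-family
query is available (not MoltenVK's exact helper): querying the MTLDevice stays
correct when an x86_64 build runs under Rosetta 2, where a build-time check for
arm64 would give the wrong answer.

    #import <Metal/Metal.h>

    static bool isAppleGPU(id<MTLDevice> mtlDevice) {
        if (@available(macOS 10.15, iOS 13.0, *)) {
            return [mtlDevice supportsFamily: MTLGPUFamilyApple1];
        }
        return false;   // older OS: fall back to other feature queries as needed
    }
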
For a combined depth-stencil format in a MVKImageView attachment with
VK_IMAGE_ASPECT_STENCIL_BIT, the attachment format may have been swizzled
to a stencil-only format. In this case, we want to guard against an attempt
to store the non-existent depth component.
Pass the MVKImageView attachment to MVKRenderPassAttachment::encodeStoreAction()
and MVKRenderPassAttachment::populateMTLRenderPassAttachmentDescriptor() to
check the attachment format's depth component.
Consolidate calls to MVKImageView::populateMTLRenderPassAttachmentDescriptor() by calling
it from within MVKRenderPassAttachment::populateMTLRenderPassAttachmentDescriptor().
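
A minimal sketch of the guard, with illustrative helper and parameter names: if
the stencil-only view of a combined depth/stencil attachment carries a
stencil-only Metal format, the depth store action is never set to store.

    #import <Metal/Metal.h>

    static void setDepthStoreAction(MTLRenderPassDepthAttachmentDescriptor* depthDesc,
                                    MTLPixelFormat attachmentViewFormat,
                                    bool wantsStore) {
        bool isStencilOnly = (attachmentViewFormat == MTLPixelFormatStencil8 ||
                              attachmentViewFormat == MTLPixelFormatX32_Stencil8);
        depthDesc.storeAction = (wantsStore && !isStencilOnly) ? MTLStoreActionStore
                                                               : MTLStoreActionDontCare;
    }
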
When flattening shader inputs for stage_in, which are to be read from a buffer
that was populated as nested structs during an earlier stage, the structs will
be aligned according to C++ rules, which can affect the alignment of the first
member of the flattened input struct.
Add SPIRVShaderOutput::firstStructMemberAlignment to track the alignment
requirements of the first member of a nested structure, and recursively
determine the alignment of the first member of each nested output structure.
Move sizeOfOutput() from MVKPipeline.mm to SPIRVReflection.h,
rename to getShaderOutputSize(), and add getShaderOutputAlignment()
to extract member alignment.
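
A minimal sketch of the recursive first-member alignment described above, with
an illustrative output type rather than the actual SPIRVShaderOutput layout:
for a nested struct, the alignment that matters for the flattened stage_in
struct is the alignment of the innermost first member.

    #include <cstdint>
    #include <vector>

    struct ShaderOutput {
        uint32_t sizeBytes;                 // size of this output
        uint32_t alignmentBytes;            // natural alignment of this output
        std::vector<ShaderOutput> members;  // non-empty when this output is a struct
    };

    uint32_t firstStructMemberAlignment(const ShaderOutput& out) {
        if (out.members.empty()) { return out.alignmentBytes; }   // scalar/vector: its own alignment
        return firstStructMemberAlignment(out.members.front());   // struct: recurse into the first member
    }
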