Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and share resources for game developers.

Posts under General subtopic

Post | Replies | Boosts | Views | Activity

Display 17 PROseries
Dear colleagues, when will you add the ability to manually adjust the display's color temperature? Everyone is familiar with the color rendering issue on the 17 series. Many complain about the display's yellowish tint. I'm one of those people, with a G9N panel, but I can't get whites right; there's a persistent yellow tint. TrueTone only solves this problem under cool, white lighting conditions. So, the display might work as intended, but how can I make it work consistently? If there were a way to manually adjust TrueTone, many users wouldn't be so upset when buying a new device, fearing the display would be yellow. I'd like users to be able to choose their preferred color, warm or cool, and have a scale to adjust it! This would solve the yellowish tint issue on 17 series displays! Thank you.
Replies: 0 | Boosts: 0 | Views: 85 | Activity: 3w
How to apply a geometry modifier to a VideoMaterial in RealityKit?
I have a model that uses a video material as the surface shader and I need to also use a geometry modifier on the material. This seemed like it would be promising (adapted from https://developer.apple.com/wwdc21/10075 ~5m 50s):

// Did the setup for the video and AVPlayer, eventually leading me to
let videoMaterial = VideoMaterial(avPlayer: avPlayer)

// Assign the material to the entity
entity.model!.materials = [videoMaterial]

// The part shown in WWDC: the library and geometry modifier were set up before,
// so now try to map the new custom material onto the video material
entity.model!.materials = entity.model!.materials.map { baseMaterial in
    try! CustomMaterial(from: baseMaterial, geometryModifier: geometryModifier)
}

But I get the following error:

Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: RealityFoundation.CustomMaterialError.defaultSurfaceShaderForMaterialNotFound

How can I apply a geometry modifier to a VideoMaterial? Or, if I can't do that, is there an easy way to route the AVPlayer video data into the baseColor of CustomMaterial?
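One note on the crash itself, separate from the underlying question: the throwing initializer can be wrapped in do/catch so an unsupported base material degrades gracefully instead of trapping. A minimal sketch, reusing only the names from the post:

entity.model!.materials = entity.model!.materials.map { baseMaterial in
    do {
        return try CustomMaterial(from: baseMaterial, geometryModifier: geometryModifier)
    } catch {
        // VideoMaterial provides no default surface shader, so this path is
        // hit for it; keep the original material rather than crashing.
        print("Could not wrap material: \(error)")
        return baseMaterial
    }
}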
Replies: 2 | Boosts: 0 | Views: 1.2k | Activity: Dec ’25
Utilizing Point Cloud data from `ObjectCaptureSession` in WWDC23
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS. I have two questions regarding the newly updated APIs. From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses Point Cloud data captured from the iPhone LiDAR sensor. I want to know how to take the Point Cloud data captured on iPhone with ObjectCaptureSession and use it to create 3D models with PhotogrammetrySession on macOS. From the example code from WWDC21, I know that PhotogrammetrySession utilizes the depth map from captured photo images by embedding it into the HEIC image, and uses those data to create a 3D asset on macOS. I would like to know if Point Cloud data is also embedded into the image to be used during 3D reconstruction, and if not, how else the Point Cloud data is passed in to be used during reconstruction. Another question: I know that Point Cloud data is returned as a result from a PhotogrammetrySession.Request. I would like to know if this Point Cloud data is the same set of data captured during ObjectCaptureSession from WWDC23 that is used to create ObjectCapturePointCloudView. Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
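For the hand-off step, the WWDC23 flow simply points PhotogrammetrySession at the capture folder produced on the phone; the session reads the images along with whatever auxiliary data the capture pipeline embedded. A minimal sketch with hypothetical paths (the exact folder layout is whatever ObjectCaptureSession wrote):

import RealityKit

// Hypothetical locations; Images/ is the folder ObjectCaptureSession produced.
let imagesFolder = URL(fileURLWithPath: "/path/to/Capture/Images", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/model.usdz")

let session = try PhotogrammetrySession(
    input: imagesFolder,
    configuration: PhotogrammetrySession.Configuration())
try session.process(requests: [.modelFile(url: outputModel)])

// Results (including .pointCloud requests) arrive asynchronously on session.outputs.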
Replies: 6 | Boosts: 0 | Views: 2.3k | Activity: Jul ’25
vImageConverter_CreateWithCGImageFormat Fails with kvImageInvalidImageFormat When Trying to Convert CMYK to RGB
So I get JPEG data in my app. Previously I was using the higher-level NSBitmapImageRep API and just feeding the JPEG data to it. But now I've noticed on Sonoma, if I get a JPEG in the CMYK color space, the NSBitmapImageRep renders mostly black and is corrupted. So I'm trying to drop down to the lower-level APIs. Specifically, I grab a CGImageRef and try to use the Accelerate API to convert it to another format (to hopefully work around the issue):

CGImageRef sourceCGImage = CGImageCreateWithJPEGDataProvider(jpegDataProvider,
                                                             NULL,
                                                             shouldInterpolate,
                                                             kCGRenderingIntentDefault);

Now I use vImageConverter_CreateWithCGImageFormat with the following values for source and destination formats:

Source format (derived from sourceCGImage):
bitsPerComponent = 8
bitsPerPixel = 32
colorSpace = (kCGColorSpaceICCBased; kCGColorSpaceModelCMYK; Generic CMYK Profile)
bitmapInfo = kCGBitmapByteOrderDefault
version = 0
decode = 0x000060000147f780
renderingIntent = kCGRenderingIntentDefault

Destination format:
bitsPerComponent = 8
bitsPerPixel = 24
colorSpace = (DeviceRGB)
bitmapInfo = 8197
version = 0
decode = 0x0000000000000000
renderingIntent = kCGRenderingIntentDefault

But vImageConverter_CreateWithCGImageFormat fails with kvImageInvalidImageFormat. Now if I change the destination format to use 32 bitsPerPixel and use alpha in the bitmap info, vImageConverter_CreateWithCGImageFormat does not return an error, but I get a black image, just like NSBitmapImageRep.
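If the end goal is just CMYK-to-RGB conversion, one workaround worth trying is to skip vImage and let Core Graphics do the color conversion by drawing into an RGB bitmap context. A minimal Swift sketch under that assumption (the function name is illustrative):

import CoreGraphics

// Draw the CMYK-backed CGImage into an RGB context; Core Graphics performs
// the color-space conversion during the draw.
func makeRGBImage(from source: CGImage) -> CGImage? {
    guard let context = CGContext(
        data: nil,
        width: source.width,
        height: source.height,
        bitsPerComponent: 8,
        bytesPerRow: 0,  // let CG choose a valid row stride
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue) else { return nil }
    context.draw(source, in: CGRect(x: 0, y: 0,
                                    width: source.width, height: source.height))
    return context.makeImage()
}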
Replies: 14 | Boosts: 0 | Views: 1.5k | Activity: Aug ’25
Fullscreen detection using Core Graphics
Hi, I am trying to detect if all the screens are in fullscreen mode. The current approach is to get all windows' information from CGWindowListCopyWindowInfo and then compare the frames and coordinates with the frames of NSScreen.screens. However, there is a problem: the y position of the window seems to be relative to the screen. As it is not an absolute position, I cannot compare it with the coordinates of the screen. Does anyone know if there is other information that I can use? Or is there a way to retrieve the absolute position or screen ID from the CGWindow dictionary?
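One detail that may explain the mismatch: the bounds returned by CGWindowListCopyWindowInfo use a global top-left origin, while NSScreen frames use a bottom-left origin. Flipping against the primary screen's height makes the two comparable. A small sketch (the helper name is illustrative):

import AppKit

// Convert a kCGWindowBounds rect (top-left origin, global display space)
// into Cocoa's bottom-left-origin coordinates so it can be compared
// against NSScreen.screens frames.
func cocoaRect(fromWindowBounds bounds: CGRect) -> CGRect {
    // The primary screen (screens[0]) defines the global origin.
    let primaryHeight = NSScreen.screens[0].frame.height
    return CGRect(x: bounds.minX,
                  y: primaryHeight - bounds.maxY,
                  width: bounds.width,
                  height: bounds.height)
}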
Replies: 2 | Boosts: 0 | Views: 220 | Activity: Apr ’25
My game is very big when I upload it to the App Store
I have a game that I uploaded to the App Store, and its size shows as 3 gigabytes, but when I download it, the real size is about 100 megabytes. When I upload the game to Google Play, it reports the real size, so I think the problem happens when it comes out of Xcode. Maybe someone can give me a clue about what is going on. My game was made with Unity 2020, if that helps.
Replies: 1 | Boosts: 0 | Views: 201 | Activity: Apr ’25
Game Rejected as Spam
My app is being rejected, and all I'm being told is that it is spam. I've tried improving various aspects of the game, but I just receive the same copy-and-paste rejection message each time. I have no idea if I'm moving in the right direction or what part of my game needs to be changed or improved. Is there a game quality benchmark document or some kind of resource I can use to better understand why my game is being rejected and how to bring it to a level that meets Apple's standards?
Replies: 1 | Boosts: 0 | Views: 119 | Activity: Apr ’25
How to Enable Game Mode
What is Game Mode? Game Mode optimizes your gaming experience by giving your game the highest priority access to your CPU and GPU, lowering usage for background tasks. And it doubles the Bluetooth sampling rate, which reduces input latency and audio latency for wireless accessories like game controllers and AirPods. See Use Game Mode on Mac and Port advanced games to Apple platforms.

How can I enable Game Mode in my game? Add the Supports Game Mode property (GCSupportsGameMode) to your game’s Info.plist and set it to true. Correctly identify your game’s Application Category with LSApplicationCategoryType (also Info.plist). Note: Enabling Game Mode makes your game eligible but is not a guarantee; the OS decides if it is OK to enable Game Mode at runtime. An app that enables Game Mode but isn’t a game will be rejected by App Review.

How can I disable Game Mode? Set GCSupportsGameMode to false. Note: On Mac, Game Mode is automatically disabled if the game isn’t running full screen.
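For reference, a minimal sketch of what those two Info.plist entries look like in source form; the category string here assumes the standard games category and should be adjusted to fit the app:

<key>GCSupportsGameMode</key>
<true/>
<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>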
Replies: 1 | Boosts: 0 | Views: 567 | Activity: Jul ’25
GCController.shouldMonitorBackgroundEvents = true broken?
I suspect that setting GCController.shouldMonitorBackgroundEvents = true does not actually make the game controller's inputs accessible to the app when it is in the background. About this value the official documentation says:

A Boolean value that indicates whether the app needs to respond to controller events when it isn’t the frontmost app.

The behavior now is that when the app is in focus the user's inputs do get correctly recognized, but as soon as the app enters the background, no inputs get recognized. The controller does not get reported as disconnecting and still works, for example, in Launchpad. I am sure that about 2 months ago, when I first used this, it did work as one would expect. I have also seen that an app which lets users execute certain actions using their controller has stopped working recently, adding to my suspicion that the feature is broken. Here is a minimal reproducible example:

import SwiftUI
import GameController

@main
struct TestingControllerConnectionApp: App {
    @NSApplicationDelegateAdaptor(AppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

class AppDelegate: NSObject, NSApplicationDelegate {
    var statusItem: NSStatusItem?
    var controller: GCController?

    func applicationDidFinishLaunching(_ notification: Notification) {
        setupMenuBar()
        GCController.shouldMonitorBackgroundEvents = true

        NotificationCenter.default.addObserver(
            self,
            selector: #selector(controllerDidConnect),
            name: .GCControllerDidConnect,
            object: nil
        )
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(controllerDidDisconnect),
            name: .GCControllerDidDisconnect,
            object: nil
        )
    }

    @objc private func setupMenuBar() {
        let menu = NSMenu()
        menu.addItem(NSMenuItem(title: "Quit", action: #selector(quitApp), keyEquivalent: "q"))
        statusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength)
        statusItem?.button?.image = NSImage(resource: .controllerBar)
        statusItem?.menu = menu
    }

    @objc private func quitApp() {
        NSApp.terminate(nil)
    }

    @objc private func controllerDidConnect(_ notification: Notification) {
        if let controller = notification.object as? GCController {
            print("Controller connected")
            self.controller = controller
            if let gamepad = controller.extendedGamepad {
                gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
                    print("Button A pressed: \(pressed)")
                }
            }
        }
    }

    @objc private func controllerDidDisconnect(_ notification: Notification) {
        print("Controller disconnected")
    }
}

This is created in a completely fresh Xcode project, and NSHumanInterfaceDeviceUsageDescription has been added. I am using a PS5 controller and a Mac running macOS 15.4.1, which has been restarted, with only Xcode and the app opened. I have tested this with a multitude of different entitlements and capabilities, including:

NSHumanInterfaceDeviceUsageDescription
Supports Controller User Interaction
Required background modes -> App communicates with an accessory
com.apple.security.device.bluetooth
com.apple.security.device.hid
com.apple.security.device.usb

I have also set this value at different points in the code with no change of effect. Does anybody see if there is any fault in my code or in my understanding of the effect of the value 'shouldMonitorBackgroundEvents'? Or is this functionality actually broken on Apple's part?
Replies: 1 | Boosts: 0 | Views: 212 | Activity: Apr ’25
How to obtain frame rate for iOS ProMotion devices
Due to the release of ProMotion devices, the system may switch frame rates in certain scenarios, so data collected through CADisplayLink callbacks under the assumption of a fixed 60Hz frame rate loses its reference value. We cannot distinguish whether a slow CADisplayLink callback is due to a stutter or a system switch in frame rate. I know about Hitch Time Ratio, but I can't use that scheme for some reasons. How can I distinguish between a stutter and a frame-rate gear shift in the CADisplayLink callback? In iOS 15, CADisplayLink.preferredFrameRateRange.preferred always returns 0, while minimum and maximum do change. Can I use these minimum and maximum range values as criteria to distinguish between frame rate switching and stuttering?
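One per-callback signal that does not depend on preferredFrameRateRange: every CADisplayLink callback carries both timestamp and targetTimestamp, and their difference is the frame budget at the current rate. Comparing the actual inter-callback delta against that budget is one way to separate a hitch from a rate switch. A sketch, where HitchDetector and the 1.5x threshold are illustrative choices:

import QuartzCore

final class HitchDetector {
    private var lastTimestamp: CFTimeInterval = 0

    @objc func displayLinkDidFire(_ link: CADisplayLink) {
        // Frame budget the system currently expects; this shrinks or grows
        // when ProMotion shifts rate gears.
        let expected = link.targetTimestamp - link.timestamp
        if lastTimestamp > 0 {
            let actual = link.timestamp - lastTimestamp
            // Exceeding the current budget by a wide margin suggests a real
            // hitch rather than a rate switch.
            if actual > expected * 1.5 {
                print("possible hitch: \(actual)s vs budget \(expected)s")
            }
        }
        lastTimestamp = link.timestamp
    }
}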
Replies: 1 | Boosts: 0 | Views: 158 | Activity: May ’25
Embedded links not clickable in PDFs for iOS devices
I have an SPFx React application where I print the HTML page content using the default JavaScript window.print() functionality. Once I save the page as a PDF from the print preview window and open it using Adobe Acrobat, the links (e.g., Google) within the content are not clickable and appear as plain text. I have tried printing random pages after searching with keywords in Google and saving the files as PDFs, but unfortunately the links are not clickable there either. To check whether it is an Adobe Acrobat issue, I performed the same print functionality from Android devices and shared the PDF file across iOS devices; in that case, when opened using Adobe Acrobat, the links are clickable. I am wondering whether it is something related to how the default print functionality works for iPadOS and iOS devices. Any insights on this would be really helpful. Thanks!!! Note: The links are clickable on macOS as well as on Windows. #ios #ipados #javascript #spfx #react
Replies: 2 | Boosts: 0 | Views: 147 | Activity: May ’25
Trouble with MDLMesh.newBox()
I'm trying to build an MDLMesh and then add normals:

let mdlMesh = MDLMesh.newBox(withDimensions: SIMD3<Float>(1, 1, 1),
                             segments: SIMD3<UInt32>(2, 2, 2),
                             geometryType: MDLGeometryType.triangles,
                             inwardNormals: false,
                             allocator: allocator)
mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0)

When I render the mesh, some normals are (0,0,0). I don't know if the problem is in the mesh or in the conversion to MTKMesh. Is there a way to examine an MDLMesh with the geometry viewer? When I look at the variable values for my mdlMesh I get this: Not too useful. I don't know how to track down the normals. What's the best way to find out where the normals are getting broken?
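One way to inspect the normals without the geometry viewer is to read them straight out of the MDLMesh after addNormals runs. A minimal sketch, assuming the mdlMesh from the post, that flags zero normals so the problem can be pinned down before the MTKMesh conversion:

import ModelIO

// Walk the normal attribute buffer and report any zero vectors.
if let normals = mdlMesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributeNormal,
                                             as: .float3) {
    var cursor = normals.dataStart
    for index in 0..<mdlMesh.vertexCount {
        let n = cursor.assumingMemoryBound(to: Float.self)
        if n[0] == 0, n[1] == 0, n[2] == 0 {
            print("vertex \(index) has a zero normal")
        }
        cursor = cursor.advanced(by: normals.stride)
    }
}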
Replies: 1 | Boosts: 0 | Views: 173 | Activity: May ’25
Does OLED burn-in affect iPad Pro displays, and should I implement a screen saver in my always-on app?
Hi everyone, I’m developing an iPad app that will be running continuously with the screen always on — similar to a restaurant ordering system. I understand that some of the newer iPad Pro models are equipped with OLED displays. I'm concerned about the potential risk of screen burn-in due to static UI elements being displayed for extended periods. Does burn-in occur on the OLED iPad Pro models under such usage? Would it be advisable to implement a screen saver or periodically animate/change parts of the UI to prevent this? Any insights or best practices would be greatly appreciated. Thank you!
Replies: 1 | Boosts: 0 | Views: 141 | Activity: Jun ’25
CGContext PDF/A intents
let dic : [AnyHashable: Any] = [
    kCGPDFXRegistryName: "http://www.color.org" as CFString,
    kCGPDFXOutputConditionIdentifier: "FOGRA43" as CFString,
    kCGPDFContextOutputIntent: "GTS_PDFX" as CFString,
    kCGPDFXOutputIntentSubtype: "GTS_PDFX" as CFString,
    kCGPDFContextCreateLinearizedPDF: "" as CFString,
    kCGPDFContextCreatePDFA: "" as CFString,
    kCGPDFContextAuthor: "Placeholder" as CFString,
    kCGPDFContextCreator: "Placeholder" as CFString
]

Hello, I would now like to export my PDFs as PDF/A. As far as I can tell, Core Graphics offers the right options for this. Unfortunately, the documentation does not show what value 'kCGPDFContextCreatePDFA' or 'kCGPDFContextCreateLinearizedPDF' requires. What I have already tried: GTS_PDFA1, PDF/A-1, true as CFString. (In my CFDictionary above, ...Author etc. are working perfectly.) In the Finder you can see these two options, which I would also like to implement in my app. Thank you in advance!
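A possibility worth testing, based on how other creation-time kCGPDFContext* flags behave: treat the two keys as booleans rather than strings. This is an assumption, not documented behavior, so the resulting file should be checked with a PDF/A validator:

// Assumption: boolean values rather than CFString, mirroring other
// kCGPDFContext* creation flags; verify the output with a PDF/A validator.
let options: [CFString: Any] = [
    kCGPDFContextCreatePDFA: true,
    kCGPDFContextCreateLinearizedPDF: true,
    kCGPDFContextAuthor: "Placeholder" as CFString,
    kCGPDFContextCreator: "Placeholder" as CFString
]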
Replies: 1 | Boosts: 0 | Views: 198 | Activity: Jun ’25
What good is NSBitmapFormatAlphaNonpremultiplied?
If I create a bitmap image and then try to get ready to draw into it, like so:

NSBitmapImageRep* newRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes: nullptr
    pixelsWide: 128
    pixelsHigh: 128
    bitsPerSample: 8
    samplesPerPixel: 4
    hasAlpha: YES
    isPlanar: NO
    colorSpaceName: NSDeviceRGBColorSpace
    bitmapFormat: NSBitmapFormatAlphaNonpremultiplied | NSBitmapFormatThirtyTwoBitBigEndian
    bytesPerRow: 4 * 128
    bitsPerPixel: 32];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep: newRep]];

then the log shows this error:

CGBitmapContextCreate: unsupported parameter combination:
RGB 8 bits/component, integer 512 bytes/row
kCGImageAlphaLast kCGImageByteOrderDefault kCGImagePixelFormatPacked
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.

If I don't use NSBitmapFormatAlphaNonpremultiplied as part of the format, I don't get the error message. My question is, why does the constant NSBitmapFormatAlphaNonpremultiplied exist if you can't use it like this? If you're wondering why I wanted to do this: I want to extract the RGBA pixel data from an image, which might have non-premultiplied alpha. And elsewhere online, I saw advice that if you want to look at the pixels of an image, draw it into a bitmap whose format you know and look at those pixels. And I don't want the process of drawing to premultiply my alpha.
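Since the error list shows CGBitmapContext only accepts premultiplied (or alpha-skipping) RGBA layouts as drawing destinations, one workaround is to draw premultiplied and then un-premultiply the buffer with Accelerate afterward. A Swift sketch, assuming `context` is the 128x128 premultiplied RGBA8 bitmap context that was drawn into (note the round trip can lose a little precision at low alpha values):

import Accelerate

// Un-premultiply the pixels of an RGBA8888 bitmap context in place.
var buffer = vImage_Buffer(data: context.data,
                           height: vImagePixelCount(context.height),
                           width: vImagePixelCount(context.width),
                           rowBytes: context.bytesPerRow)
vImageUnpremultiplyData_RGBA8888(&buffer, &buffer, vImage_Flags(kvImageNoFlags))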
Replies: 3 | Boosts: 0 | Views: 176 | Activity: Jun ’25
Firebase not initializing in iOS Unreal Engine 5.6 app (using MetaHuman + FirebaseFeatures)
Hi everyone, I'm building a native iOS app using Unreal Engine 5.6 with Firebase for authentication and Firestore. The app uses a MetaHuman avatar and is meant to run as a standalone UE app on iPhone. I'm using this Firebase wrapper: 👉 https://pandoa.github.io/FirebaseFeatures/ I've followed all the steps, including: Adding GoogleService-Info.plist to the Xcode project and ensuring it’s in the correct target Calling FIRApp.configure() in AppDelegate Verifying the plist is bundled correctly However, the app crashes on launch, and Firebase does not initialize properly. Crash log shows: [FirebaseCore][I-COR000005] No app has been configured yet. Setup details: Unreal Engine: 5.6 (source build, macOS) iOS Deployment: 17.5 MetaHuman character packaged correctly and app launches fine without Firebase Has anyone here managed to get Firebase working inside a native Unreal Engine iOS app with this setup? I'd love to hear if there’s something I’m missing — maybe something with initialization timing or module loading? Thanks so much in advance 🙏
Replies: 1 | Boosts: 0 | Views: 758 | Activity: Jul ’25
Xcode compile gets stuck (stops) when adding new source files or adding functions to existing files
Hello, we are working on an iOS game project and, as it progresses, the project grows larger and larger. Because we are using other game dependencies and libraries, "larger and larger" here refers to the whole project; the source files integrated and compiled by Xcode are not many. Now we seem to have hit a bottleneck: when I add new files, or add functions to existing files to implement a new feature, the Xcode compile gets stuck (stops). It shows "Indexing | Initializing datastore" forever and cannot produce a final build. macOS 15.1, Xcode 16.2. Can you provide any solutions to this problem? Also submitted Feedback ID #FB18432749.
Replies: 4 | Boosts: 0 | Views: 710 | Activity: Aug ’25
CIBumpDistortion filter not working on my view
I'm trying to apply a CIBumpDistortion Core Image filter to a view that contains a UILabel (my storyLabel). The goal is to create a visual bump/magnifying-glass effect over the text. However, despite my attempts, the filter doesn't seem to render at all. The view and the label appear as normal, with no distortion effect. I've tried adjusting the filter parameters and reviewing the view hierarchy, but without success. I also haven't been able to find clear documentation or examples for applying this filter to a UIView's layer.

//
//  TVView.swift
//  Mistery
//
//  Created by Joje on 31/07/25.
//

import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import AVFoundation

final class TVView: UIView {

    // text animation properties
    private var textAnimationTimer: Timer?
    private var fullTextToAnimate: String = ""
    private var currentCharIndex: Int = 0

    // static-video properties
    private var player: AVQueuePlayer?
    private var playerLayer: AVPlayerLayer?
    private var playerLooper: AVPlayerLooper?

    var onNextButtonTap: () -> Void = {}

    // MARK: - Subviews

    // TV image
    private(set) lazy var tvImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.translatesAutoresizingMaskIntoConstraints = false
        imageView.image = UIImage(named: "tvFinal")
        imageView.contentMode = .scaleAspectFit
        return imageView
    }()

    // text that scrolls inside the TV
    private(set) lazy var storyLabel: UILabel = {
        let label = UILabel()
        label.translatesAutoresizingMaskIntoConstraints = false
        //label.backgroundColor = .gray
        label.textColor = .red
        label.font = UIFont(name: "MeltedMonster", size: 30)
        label.textAlignment = .left
        label.numberOfLines = 0
        label.text = ""
        return label
    }()

    private(set) lazy var nextButton: UIButton = {
        let button = UIButton(type: .system)
        button.translatesAutoresizingMaskIntoConstraints = false
        //button.backgroundColor = .darkGray
        button.addTarget(self, action: #selector(didPressNextButton), for: .touchUpInside)
        return button
    }()

    // MARK: - Lifecycle

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .black
        setupVideoPlayer()
        addSubviews()
        setupConstraints()
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        playerLayer?.frame = tvImageView.frame.insetBy(dx: tvImageView.frame.width * 0.05,
                                                       dy: tvImageView.frame.height * 0.18)
        setupFisheyeEffect()
    }

    private func setupFisheyeEffect() {
        // create the filter
        guard let filter = CIFilter(name: "CIBumpDistortion") else { return print("error") }

        storyLabel.layer.shouldRasterize = true
        storyLabel.layer.rasterizationScale = UIScreen.main.scale

        // set the parameters
        filter.setDefaults()
        // center of the effect
        let center = CIVector(x: storyLabel.bounds.midX, y: storyLabel.bounds.midY)
        filter.setValue(center, forKey: kCIInputCenterKey)
        // distortion radius
        filter.setValue(storyLabel.bounds.width, forKey: kCIInputRadiusKey)
        // distortion intensity
        filter.setValue(7, forKey: kCIInputScaleKey)

        storyLabel.layer.filters = [filter]
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // MARK: - Button actions

    @objc private func didPressNextButton() {
        onNextButtonTap()
    }

    @objc private func animateNextCharacter() {
        guard currentCharIndex < fullTextToAnimate.count else {
            textAnimationTimer?.invalidate()
            return
        }
        let currentTextIndex = fullTextToAnimate.index(fullTextToAnimate.startIndex, offsetBy: currentCharIndex)
        let partialText = String(fullTextToAnimate[...currentTextIndex])
        storyLabel.text = partialText
        currentCharIndex += 1
    }

    public func updateStoryText(with text: String) {
        textAnimationTimer?.invalidate()
        storyLabel.text = ""
        fullTextToAnimate = text
        currentCharIndex = 0
        textAnimationTimer = Timer.scheduledTimer(timeInterval: 0.12,
                                                  target: self,
                                                  selector: #selector(animateNextCharacter),
                                                  userInfo: nil,
                                                  repeats: true)
    }

    // MARK: - Setup methods

    private func setupVideoPlayer() {
        guard let videoURL = Bundle.main.url(forResource: "static-video", withExtension: "mov") else {
            print("Error: could not find the video file static-video.mov")
            return
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player = AVQueuePlayer(playerItem: playerItem)
        // LINE WITH POSSIBLE ERROR
        playerLooper = AVPlayerLooper(player: player!, templateItem: playerItem)
        playerLayer = AVPlayerLayer(player: player)
        playerLayer?.videoGravity = .resizeAspectFill
        if let layer = playerLayer {
            self.layer.addSublayer(layer)
        }
        player?.play()
    }

    private func addSubviews() {
        self.addSubview(storyLabel)
        self.addSubview(tvImageView)
        self.addSubview(nextButton)
    }

    private func setupConstraints() {
        NSLayoutConstraint.activate([
            // TV image
            tvImageView.centerXAnchor.constraint(equalTo: centerXAnchor),
            tvImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            tvImageView.widthAnchor.constraint(equalTo: widthAnchor),
            // TV text
            storyLabel.centerXAnchor.constraint(equalTo: tvImageView.centerXAnchor, constant: -50),
            storyLabel.centerYAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            storyLabel.widthAnchor.constraint(equalTo: tvImageView.widthAnchor, multiplier: 0.35),
            storyLabel.heightAnchor.constraint(equalTo: tvImageView.heightAnchor, multiplier: 0.42),
            // TV button
            nextButton.topAnchor.constraint(equalTo: tvImageView.centerYAnchor, constant: -25),
            nextButton.centerXAnchor.constraint(equalTo: self.centerXAnchor, constant: 190),
            nextButton.widthAnchor.constraint(equalToConstant: 100),
            nextButton.heightAnchor.constraint(equalToConstant: 160)
        ])
    }
}

#Preview {
    ViewController()
}
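A likely cause worth noting here: CALayer.filters only has an effect on macOS; on iOS, Core Image filters attached to a layer are ignored, which matches the "nothing renders" symptom. One workaround is to rasterize the label, run the filter through a CIContext, and display the result in an image view. A minimal sketch (the function name is illustrative, and the parameter values mirror the ones above):

import UIKit
import CoreImage

// Snapshot a view, apply CIBumpDistortion via Core Image, and return the
// distorted image for display in a UIImageView overlay.
func bumpDistortedSnapshot(of view: UIView) -> UIImage? {
    let snapshot = UIGraphicsImageRenderer(bounds: view.bounds).image { ctx in
        view.layer.render(in: ctx.cgContext)
    }
    guard let input = CIImage(image: snapshot),
          let filter = CIFilter(name: "CIBumpDistortion") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: input.extent.midX, y: input.extent.midY),
                    forKey: kCIInputCenterKey)
    filter.setValue(view.bounds.width, forKey: kCIInputRadiusKey)
    filter.setValue(0.7, forKey: kCIInputScaleKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}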
Replies: 3 | Boosts: 0 | Views: 192 | Activity: Sep ’25
Low Power Mode on macOS 26 Tahoe + VSync fullscreen limits application to 30 fps
I'm experiencing a specific issue: when using any of the macOS 26 Tahoe betas with Low Power Mode enabled and VSync in fullscreen, my application's framerate gets limited to a hard 30 fps. I have not experienced this on any older OS. For example, Low Power Mode on Ventura 13.6 with VSync fullscreen lets my application run at the full 60 fps without issues. Is this a bug or a change in the behavior of Low Power Mode on Tahoe? My application is 3D, runs at 60 fps, and is sensitive to tearing, so I need VSync, and it is mostly used in fullscreen. And Low Power Mode is the default for many Macs, so the default experience on Tahoe currently is a halved 30 fps. There also seem to be inconsistencies as to which machines this happens on, but older OSes are always fine.
Replies: 1 | Boosts: 0 | Views: 377 | Activity: Aug ’25
Will OpenGL API and Drivers be removed after appleOS 26?
Hi, I am a multimedia and graphics researcher, and I am wondering if the OpenGL API and drivers will be removed after appleOS 26 (macOS 26, iOS 26, iPadOS 26, visionOS 26). I am asking this because most of the libraries I use depend on OpenGL, like CGAL, libigl, immediate-mode UI, NanoVG, NanoGUI, and Bullet Physics. Transitioning to Vulkan and Metal while using and learning those libraries is just not viable. I am the sole developer and I just want to ask you that. Regards.
Replies: 1 | Boosts: 0 | Views: 232 | Activity: Aug ’25