Overview

Post · Replies · Boosts · Views · Activity
UITextField and UITextView unexpectedly trigger the network permission request dialog
In iOS 26.4, after installing the app for the first time, opening the app and tapping a UITextField triggers the system network permission prompt. This issue did not exist in iOS 26.3 or earlier; it appears only in iOS 26.4. This is a serious bug: the network permission request should not appear when the developer has not called any network-related API.
Replies: 4 · Boosts: 0 · Views: 433 · Activity: 4w
Approval of first in-App purchase
I am looking for some insight and an assist (planning to test in-app purchases with TestFlight); everything works in Xcode. I have created my very first in-app purchase app (I have other paid apps in the App Store). Under the Distribution tab, in the Monetization section, I created the in-app purchase item. I completed all fields and the status has now changed from "Waiting for Metadata" to "Ready to Submit". From my reading, the in-app purchase item would be reviewed with a new version of the app. I uploaded a new version, added a sandbox tester, and submitted it for review. My app in TestFlight now shows a status of Testing, but the in-app purchase still shows "Waiting for Review". On the Distribution tab the "Add for Review" button is active, but it seems a bit illogical to click it, as the in-app purchase is the only completed item on my app's Distribution tab (I am not ready to submit the app for review yet). So, as illogical as it appears, is this the way to get your first in-app purchase approved?
Replies: 1 · Boosts: 0 · Views: 72 · Activity: 3w
What is the optimal number of records per shard?
Hello, I am currently developing a PIR server using the pir-server-example repository. We are anticipating a total of 10 million URLs for our dataset. In this context, what would be the optimal shard size (number of records per shard) to balance computational latency and communication overhead? Any advice or best practices for handling a dataset of this scale would be greatly appreciated. Thank you.
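Not Apple guidance, just a rough way to frame the trade-off: with hash-based sharding, more shards means fewer records (and less server computation) per query, but the shard index also reveals more about the queried keyword, so there is a privacy dimension as well. A hypothetical Swift sketch, with made-up candidate shard counts, purely for illustration:

```swift
import Foundation
import CryptoKit

// Hypothetical: assign a URL to a shard by a stable SHA-256-based hash.
func shardIndex(for url: String, shardCount: Int) -> Int {
    let digest = SHA256.hash(data: Data(url.utf8))
    // Fold the first 8 digest bytes into an unsigned integer.
    let value = digest.prefix(8).reduce(UInt64(0)) { ($0 << 8) | UInt64($1) }
    return Int(value % UInt64(shardCount))
}

let totalRecords = 10_000_000
for shardCount in [64, 256, 1_024, 4_096] {
    // Fewer shards: more PIR computation per query.
    // More shards: cheaper queries, but the shard index leaks more
    // information about which keyword is being looked up.
    print("shards: \(shardCount), ~records per shard: \(totalRecords / shardCount)")
}

let idx = shardIndex(for: "https://example.com", shardCount: 256)
print("example shard index: \(idx)")
```

The actual sweet spot depends on the PIR scheme's per-record cost and the privacy budget, so the pir-server-example defaults and documentation should be the real reference here.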
Replies: 2 · Boosts: 0 · Views: 214 · Activity: 3w
Is continuous background GPS tracing during device idle allowed?
We want to implement continuous GPS tracking in a React Native iOS app for security purposes. We need tracking in the following scenarios:

- App is terminated
- App is minimised (not killed)
- App is open and the device is put to sleep (locked)
- App is minimised and the device is put to sleep (locked)

Currently it works in the following two scenarios:

- When the app is open in the foreground
- When the app is killed (traces in the background)

We would like to understand:

- Is continuous background location tracking during device idle allowed on iOS?
- If allowed, what is the recommended approach to ensure reliable tracking?
- Are there any specific configurations, permissions, or limitations (battery optimization, system restrictions) we should be aware of?

We are using the Transistorsoft React Native background geolocation library with background location updates enabled and the required permissions configured. This use case is specifically for user safety and security tracking. Any guidance on best practices and platform limitations would be helpful.
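For reference, a hedged sketch of the native Core Location configuration usually involved in the scenarios above (the library configures this under the hood; behavior should be verified against Apple's Core Location documentation):

```swift
import CoreLocation

// Sketch: Core Location setup for continued updates while backgrounded or
// locked. Requires the "location" UIBackgroundModes entry in Info.plist
// and "Always" authorization. All names below are real Core Location APIs.
final class Tracker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization()
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.allowsBackgroundLocationUpdates = true      // keep updates flowing in background
        manager.pausesLocationUpdatesAutomatically = false  // don't let the system pause tracking
        manager.showsBackgroundLocationIndicator = true
        manager.startUpdatingLocation()
        // Relaunch after termination is only supported via the
        // significant-change or region-monitoring services:
        manager.startMonitoringSignificantLocationChanges()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        // Handle location updates here.
    }
}
```

Continuous high-accuracy updates while the device is idle are permitted with this setup, but a terminated app only gets relaunched for significant-change or region events, not for every fix.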
Replies: 1 · Boosts: 0 · Views: 276 · Activity: 4w
URL Filter Network Extension
Hello team, I am trying to find a way to block URLs in the Chrome browser if they are found in a local blocked-list cache. I found the URL Filter Network Extension very suitable for my requirement, but I see in multiple places that this solution is only for enterprise, MDM, or supervised devices. Can I run this for normal users? My target audience would be bank users. One more thing: how can I test this in a development environment if supervised devices are required, and do we need a special entitlement? When trying to run the sample project in the simulator, I get the error below.
Replies: 15 · Boosts: 0 · Views: 529 · Activity: 3w
First-time notarization submissions stuck "In Progress" — two submissions, 15+ hours
This is my first time submitting an app for notarization. Both submissions have been stuck "In Progress" with no logs available.

Submission 1:
- ID: 43ea68c1-5291-42c6-b0e1-3cacab4ca01a
- Submitted: 2026-04-09T02:05:34Z
- Status: In Progress (15+ hours)

Submission 2:
- ID: 12ea49a0-64cf-495e-af7e-9aad5aabe30f
- Submitted: 2026-04-09T17:06:51Z
- Status: In Progress (1+ hour)

Details:
- Team ID: PWTWN9N25D
- App: native macOS SwiftUI app (arm64), ~84 MB zipped
- Signed with a Developer ID Application certificate, Hardened Runtime enabled
- All embedded helper binaries individually codesigned with Hardened Runtime
- codesign --verify --deep --strict passes
- Submitted via xcrun notarytool submit with --keychain-profile
- notarytool log returns "not yet available" for both
- Apple System Status shows all services available
Replies: 3 · Boosts: 1 · Views: 823 · Activity: 3w
Blending walk and run animations in RealityKit
Hi everybody, I have two separate animation files, run.usdz and walk.usdz, which load perfectly in Reality Composer Pro and in the RealityKit application. I want to gradually increase the speed of my player by moving the blend weight from 0.0 (walking) to 1.0 (full-speed running).

```swift
let rabbit = await RabbitBuilder.loadWalkingRabbit()
let runningRabbit = await RabbitBuilder.loadRunningRabbit()
rabbit.scale = SIMD3(0.05, 0.05, 0.05)
runningRabbit.scale = SIMD3(0.05, 0.05, 0.05)

let walkAnimation = rabbit.availableAnimations
let runAnimation = runningRabbit.availableAnimations
RabbitWalker.walkAnim = walkAnimation.first!
RabbitWalker.runAnim = runAnimation.first!

guard let walk = RabbitWalker.walkAnim, let run = RabbitWalker.runAnim else { return }

let blendTree = BlendTreeAnimation<JointTransforms>(
    BlendTreeBlendNode(sources: [
        BlendTreeSourceNode(source: walk.definition, name: "walk", weight: .value(1 - weight)),
        BlendTreeSourceNode(source: run.definition, name: "run", weight: .value(weight))
    ]),
    name: "rabbitLocomotion",
    repeatMode: .repeat,
    offset: TimeInterval(elapsed)
)

// Runtime error after executing this line:
// "Cannot add incompatible timeline type to blend tree."
guard let resource = try? AnimationResource.generate(with: blendTree) else { return }
entity.playAnimation(resource)

static func loadWalkingRabbit() async -> Entity? {
    do {
        let scene = try await Entity(named: "Scene", in: realityKitEnvironmentBundle)
        guard let rabbit = await scene.findEntity(named: "RabbitWalk") else { return nil }
        await rabbit.removeFromParent()
        return rabbit
    } catch {
        return nil
    }
}

static func loadRunningRabbit() async -> Entity? {
    do {
        let scene = try await Entity(named: "Scene", in: realityKitEnvironmentBundle)
        guard let rabbit = await scene.findEntity(named: "RabbitRun") else { return nil }
        await rabbit.removeFromParent()
        return rabbit
    } catch {
        return nil
    }
}
```

But when I run this code I get this error: "Cannot add incompatible timeline type to blend tree." I have looked at the developer sample code but couldn't find any relevant BlendTreeAnimation sample that blends two animations. I would be very happy if someone could direct me to a solution. Regards.
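Not an answer to the blend-tree error itself, but as a possible workaround: RealityKit can cross-fade between two clips on the same entity via the transitionDuration parameter of playAnimation. This is a hedged sketch that assumes both clips target the same skeleton; `rabbit` and the resources correspond to the entities and animations loaded in the post above:

```swift
import RealityKit
import Foundation

// Hypothetical workaround sketch: instead of a blend tree, cross-fade
// from the currently playing walk clip to the run clip on one entity.
func transitionToRun(on rabbit: Entity,
                     runResource: AnimationResource,
                     over duration: TimeInterval) {
    // playAnimation(_:transitionDuration:startsPaused:) fades the new
    // clip in over `duration` while the current clip fades out.
    rabbit.playAnimation(runResource.repeat(duration: .infinity),
                         transitionDuration: duration,
                         startsPaused: false)
}
```

This gives a timed cross-fade rather than a continuously adjustable blend weight, so it may or may not fit the gradual speed-up described above.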
Replies: 4 · Boosts: 0 · Views: 378 · Activity: 3w
Wrong value for StoreKit custom purchase link allowed-regions entitlement
Greetings fellow devs. After accepting the Alternative Terms Addendum for Apps in the EU and adding the StoreKit External Purchases or Offers capability to our app identifier via App Store Connect, the entitlement showing up in Xcode is com.apple.developer.storekit.custom-purchase-link.allowed-regions, and it has the value 'jp'. How can we change the value of that entitlement to 'gr'? We tried changing it in Xcode, but we get the error: Provisioning profile "iOS Team Provisioning Profile: [app identifier]" doesn't match the entitlements file's value for the com.apple.developer.storekit.custom-purchase-link.allowed-regions entitlement. In Certificates, Identifiers & Profiles in the developer account there is no way to configure that capability. We sent a request to support and they only gave us a link to the documentation and to this forum. We have completed every business agreement requested, and we have chosen Greece as the organisation region and as the app's availability region wherever possible. We haven't found anything that would explain why Japan was chosen for the entitlement. So where can this allowed-regions entitlement be configured? Xcode version is 16.4 and the iOS minimum deployment is 18.
Replies: 1 · Boosts: 0 · Views: 156 · Activity: 3w
iOS 26: Interactive sheet dismissal causes layout hitch in underlying SwiftUI view
I’ve been investigating a noticeable animation hitch when interactively dismissing a sheet over a SwiftUI screen of moderate complexity. This was not the case on iOS 18, so I’m curious whether others are seeing the same on iOS 26 or have found any mitigations.

When dismissing a sheet via the swipe gesture, there’s a visible hitch right after lift-off:

- The hitch comes from layout work in the underlying view (behind the sheet).
- The duration scales with the complexity of that view (e.g. the number of TextFields / layout nodes).
- The animation for a programmatic dismiss (e.g. tapping a “Done” button) is smooth, although it hangs for a similar amount of time before dismissing, so the underlying work appears to still happen.
- SwiftUI is not reevaluating the body during this (validated with Self._printChanges()), so that is not the cause.

Using Instruments, the hitch shows up as a layout spike on the main thread:

```
54ms UIView layoutSublayersOfLayer
54ms └─ _UIHostingView.layoutSubviews
38ms    └─ SwiftUI.ViewGraph.updateOutputs
11ms       ├─ partial apply for implicit closure #1 in closure #1
           │  in closure #1 in Attribute.init<A>(_:)
 4ms       └─ -[UIView
```

For the same hierarchy with varying complexity:

- ~3 TextFields in a List: ~25ms (not noticeable)
- ~20+ TextFields: ~60ms (clearly visible hitch)

The same view hierarchy on iOS 18 did not exhibit a visible hitch. I’ve tested this on an iOS 26.4 device and simulator.
I’ve also included a minimal reproducible example that illustrates this:

```swift
struct ContentView: View {
    @State var showSheet = false

    var body: some View {
        NavigationStack {
            ScrollView {
                ForEach(0..<120) { _ in
                    RowView()
                }
            }
            .navigationTitle("Repro")
            .toolbar {
                ToolbarItem(placement: .topBarTrailing) {
                    Button("Present") { showSheet = true }
                }
            }
            .sheet(isPresented: $showSheet) {
                PresentedSheet()
            }
        }
    }
}

struct RowView: View {
    @State var first = ""
    @State var second = ""

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            Text("Row")
                .font(.headline)
            HStack(spacing: 12) {
                TextField("First", text: $first)
                    .textFieldStyle(.roundedBorder)
                TextField("Second", text: $second)
                    .textFieldStyle(.roundedBorder)
            }
            HStack(spacing: 12) {
                Text("Third")
                Text("Fourth")
                Image(systemName: "chevron.right")
            }
        }
    }
}

struct PresentedSheet: View {
    @Environment(\.dismiss) private var dismiss

    var body: some View {
        NavigationStack {
            List {}
                .navigationTitle("Swipe To Dismiss Me")
                .toolbar {
                    ToolbarItem(placement: .topBarTrailing) {
                        Button("Done") { dismiss() }
                    }
                }
        }
    }
}
```

Is anyone else experiencing this, and have any mitigations been found beyond reducing view complexity? I’ve filed a feedback report under FB22501630.
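One mitigation that could be sketched (purely speculative, untested against the iOS 26 behavior described in this post) is to swap the heavy rows for lightweight placeholders while the sheet is presented, so the layout pass behind the sheet has less to do. `RowContainer` is a hypothetical wrapper, not part of the post's code:

```swift
import SwiftUI

// Speculative sketch: render cheap placeholder rows while a sheet is up,
// restoring the real rows (with their TextFields) once it is dismissed.
struct RowContainer: View {
    let sheetIsPresented: Bool

    var body: some View {
        if sheetIsPresented {
            // Hypothetical lightweight stand-in with a similar footprint.
            RoundedRectangle(cornerRadius: 8)
                .fill(.quaternary)
                .frame(height: 96)
        } else {
            RowView() // the real row from the repro
        }
    }
}
```

Note this trades visual fidelity during the transition and would reset the TextFields' local @State, so the text would need to live in a model object instead.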
Replies: 1 · Boosts: 0 · Views: 241 · Activity: 4w
Technical scope of Default Dialer App in EU: Access to Cellular Audio Stream for AI Services
Hello, I am researching the technical feasibility of developing a Default Dialer App for the EU market using the specific entitlements granted under the Digital Markets Act (DMA). Our primary goal is to implement a cellular/VoLTE-based calling system (not mVoIP), and we need to clarify whether it is possible to provide features such as STT (speech-to-text) and call summarization, which require in-call audio recording.

Regarding the Default Dialer App entitlement in the EU, I would like to clarify the following:

- Access to raw audio stream: When an app is granted Default Dialer status in the EU, does it gain programmatic access to the downlink and uplink audio streams of a cellular/VoLTE call for recording purposes?
- LiveCommunicationKit and recording APIs: Does LiveCommunicationKit (or any related framework in iOS 26) provide specific APIs for a third-party dialer to capture native telephony audio, or is recording still restricted by the system's sandbox?
- Entitlement scope for partners: If an EU-based partner obtains the necessary entitlements, can those entitlements be used to grant our application the authority to process cellular calls and access the associated audio data?
- Recommended implementation: Are there any Apple-sanctioned methods or specific frameworks for implementing call recording for AI-driven services (STT, summarization) within the scope of the new EU-specific regulations, without violating the current iOS security architecture?

We need to provide a clear feasibility report to our EU partners in upcoming meetings and establish the implementation scope with them. Any guidance on whether a third-party app can technically and legally record cellular calls under these specific conditions would be greatly appreciated. Thank you.
Replies: 2 · Boosts: 0 · Views: 103 · Activity: 3w
EASession(accessory:forProtocol:) always returns nil (MFi accessory, iAP2)
Platform: iOS 17+ | Hardware: custom MFi-certified accessory (USB-C, iAP2) | Language: Swift

Problem: We have a custom MFi-certified accessory communicating over USB-C using ExternalAccessory. The app calls EASession(accessory:forProtocol:) after receiving EAAccessoryDidConnect, but it always returns nil. We never get past session creation.

What we have verified: We captured a sysdiagnose on-device and analysed the accessoryd-packets log. The full iAP2 handshake completes successfully at the OS level:

- USB attach succeeds
- MFi auth certificate is present and Apple-issued
- Auth challenge and response complete successfully
- IdentificationInformation is accepted by iOS; the protocol string and Team ID are correct
- EAAccessoryDidConnect fires as expected
- iOS sends StartExternalAccessoryProtocolSession; the OS-level session is established

So the hardware, MFi auth, protocol string, and Team ID are all correct. Despite this, EASession(accessory:forProtocol:) returns nil in the app.

We also confirmed:

- The protocol string in UISupportedExternalAccessoryProtocols in Info.plist matches the accessory exactly
- The protocol string in code matches Info.plist
- App entitlements are correctly configured
- EAAccessoryManager.shared().registerForLocalNotifications() is called before connection

Current connection code:

```swift
@objc private func accessoryDidConnect(_ notification: Notification) {
    guard let accessory = notification.userInfo?[EAAccessoryKey] as? EAAccessory else { return }
    DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
        self.tryConnectToAccessory()
    }
}

private func tryConnectToAccessory() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 3.0) {
        for accessory in EAAccessoryManager.shared().connectedAccessories {
            let session = EASession(accessory: accessory, forProtocol: "")
            // session is always nil here
        }
    }
}
```

Questions:

- The packet log shows a ~4 second gap between EAAccessoryDidConnect firing and iOS internally completing session readiness (StartExternalAccessoryProtocolSession). Is there a reliable way to know when iOS is actually ready to grant an EASession, rather than using a fixed delay?
- Is there a delegate callback or notification that fires when the accessory protocol session is ready to be opened, rather than relying on EAAccessoryDidConnect plus an arbitrary delay?
- Are there any known conditions on iOS 17+ under which EASession returns nil even though the iAP2 handshake completed successfully at the OS level?
- Is retrying EASession after a nil result a supported pattern, or does a nil result mean the session will never succeed for that connection?

Any guidance appreciated.
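For completeness, a hedged sketch of what typically follows a successful EASession creation (this does not address the nil result itself; the stream handling below uses the standard ExternalAccessory and Stream APIs):

```swift
import ExternalAccessory
import Foundation

// Sketch: once EASession returns non-nil, the accessory's protocol
// streams must be scheduled on a run loop and opened before any I/O.
final class AccessoryLink: NSObject, StreamDelegate {
    private var session: EASession?

    func open(accessory: EAAccessory, protocolString: String) -> Bool {
        guard let session = EASession(accessory: accessory,
                                      forProtocol: protocolString) else {
            return false
        }
        self.session = session
        for stream in [session.inputStream, session.outputStream] as [Stream?] {
            stream?.delegate = self
            stream?.schedule(in: .current, forMode: .default)
            stream?.open()
        }
        return true
    }

    func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
        switch eventCode {
        case .hasBytesAvailable:
            break // read from the input stream here
        case .hasSpaceAvailable:
            break // write queued bytes to the output stream here
        default:
            break
        }
    }
}
```

Note that the EASession initializer is documented to require a protocol string that the accessory declared, so an empty string (as in the delay-based code above) would be expected to fail regardless of timing.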
Replies: 8 · Boosts: 0 · Views: 532 · Activity: 4w
Apple Developer Account permanent Termination/Deletion request
I am requesting full deletion and termination of my Apple Developer account and all associated data, and revocation of the Apple Developer Agreement. This should be irreversible. I have already tried to contact Apple Developer Support a few times, and the request just gets ignored and closed. The developer account is associated with the Apple ID I am using on these forums. Thanks.
Replies: 9 · Boosts: 0 · Views: 307 · Activity: 3w
Lookify: AI Virtual Try-On — Stuck in "Waiting for Review" | 2 Months
Hello Apple Developer Community and App Review Team, I'm writing to seek guidance regarding my app Lookify: AI Virtual Try-On (App ID: 6757718224), which has been caught in an ongoing review cycle since February 15, 2026, nearly two months ago.

Submission history (date, version, status):

- Feb 15: iOS 1.1.0, Removed
- Feb 19: iOS 1.1.0, Removed
- Feb 21: iOS 1.1.0, Removed
- Apr 3 (2:21 AM): iOS 1.1.0, Removed
- Apr 3 (1:17 PM): iOS 1.1.0, Removed
- Apr 6 (current): iOS 1.1.0, Waiting for Review

Each submission was either self-removed after extended waiting periods with no reviewer feedback, or removed to address potential issues, only to re-enter the queue with the same outcome. The current submission has now been in "Waiting for Review" status since April 6 with no activity, no messages, and no indication of progress.

What I've done to comply:

- Updated the privacy policy to be fully GDPR and KVKK compliant
- Provided clear demo account credentials and usage instructions for the AI try-on feature
- Ensured all metadata, screenshots, and descriptions accurately reflect the app's functionality
- Reviewed Apple's App Review Guidelines thoroughly before each resubmission

I understand that AI-powered apps, especially those involving visual try-on technology, may require closer scrutiny, and I fully respect that process. I'm not asking to bypass any review step. I simply ask for transparency: if there is an issue with the app, a rejection with specific feedback would allow me to address it immediately. This app represents months of development work. As a small independent developer, prolonged uncertainty without communication makes it very difficult to plan or improve.

My request: could anyone from the App Review team or community provide insight into:

- Whether there is an active flag or concern on this submission
- What the expected timeline might be for accounts with this submission history
- Whether an expedited review would be appropriate given this timeline

I have also submitted a contact request through the official App Review contact form. I am fully committed to making any necessary changes; I just need to know what they are. Thank you sincerely for your time and assistance.

Mustafa Bilgiç
Developer, PlayTools
Replies: 2 · Boosts: 1 · Views: 232 · Activity: 4w
AlarmKit alerting-phase playback is significantly quieter than equivalent in-app playback using AVAudioSession(.playback)
Hi all, I’m trying to determine whether the loudness gap I’m seeing between AlarmKit alert playback and normal app-managed playback is expected behavior, a sound-asset issue, or something that should be reported as a bug.

Observed behavior: When an alarm fires through AlarmKit while the device is locked, the alarm sound is significantly quieter than playback of the same or very similar audio once the app is active and using its own audio session. The difference is large enough that it does not feel like a small mastering difference; it feels like the AlarmKit / system alerting path uses a meaningfully lower effective output level than normal app playback.

Test scenario: My repro is roughly:

1. Schedule an alarm with AlarmKit.
2. Lock the device.
3. Let the alarm fire and listen during the system alerting phase.
4. Enter the app / continue into the app-driven alarm experience.
5. Play the same or equivalent alarm asset via app-managed playback.

Result: the AlarmKit / lock-screen alerting phase sounds much quieter; in-app playback sounds noticeably louder and fuller on the same device.

Current implementation: The alarm flow is currently split into two paths.

1. System alarm path:
- Alarm scheduling and alert surfacing via AlarmKit
- Device may be locked
- No attempt to manipulate system volume
- No private APIs

2. In-app playback path:
- After app activation, playback uses AVAudioSession category .playback and AVAudioPlayer
- Audio is routed as normal app playback
- This path sounds substantially louder than the AlarmKit path

Important detail: I am not asking how to override system volume. I understand that AlarmKit appears to follow the system ringer / alert volume model and does not expose a public API for custom alarm loudness. My question is narrower: is it expected that the same asset, or an equivalent asset, will sound materially quieter during the AlarmKit alerting phase than during ordinary app playback with AVAudioSession(category: .playback)?

Questions:

- Is the lower perceived loudness during AlarmKit alerting an expected property of the framework / system alarm path?
- Does AlarmKit playback use a different output path, gain policy, processing chain, or speaker treatment than normal app playback with .playback?
- Are there recommended authoring constraints for AlarmKit alarm sounds to maximize perceived loudness on iPhone speakers (transient-heavy mix, stronger mids, reduced low end, a different LUFS / peak strategy, shorter attack, etc.)?
- Has anyone measured this directly with the same WAV / CAF file, the same device, and the same system volume: locked AlarmKit playback vs unlocked in-app playback?
- If this is not expected, would Apple want this reported as a bug with a sample project, the exact iOS version, the device model, and a screen recording / audio recording?

What I’m trying to figure out: For alarm-app UX this matters a lot, because AlarmKit is the most reliable lock-screen/system path, but if AlarmKit playback is substantially quieter than normal app playback, the alarm experience is inconsistent depending on device/app state. That makes it hard to know whether to treat this as expected system behavior, a framework limitation, an asset/mastering problem, or a bug. If anyone has tested this in a controlled way or received guidance from Apple/DTS, I’d appreciate any technical detail. Thanks.
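For reference, a minimal sketch of the in-app comparison path described in this post (standard AVFoundation APIs; the bundled asset name "alarm.caf" is hypothetical):

```swift
import AVFoundation

// Sketch: play the alarm asset through the app's own audio session so
// its loudness can be compared against the AlarmKit alerting phase.
func playAlarmAssetInApp() throws -> AVAudioPlayer {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)

    // "alarm" is a hypothetical bundled asset name.
    guard let url = Bundle.main.url(forResource: "alarm", withExtension: "caf") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let player = try AVAudioPlayer(contentsOf: url)
    player.volume = 1.0 // full app-relative volume
    player.prepareToPlay()
    player.play()
    return player
}
```

Keeping this comparison path this minimal (no effects, no mixing options) helps isolate whether the gap really comes from the AlarmKit output path rather than from the app's own session configuration.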
Replies: 2 · Boosts: 0 · Views: 258 · Activity: 3w
HomeKit support on macOS
I am currently developing an app for macOS that needs to control HomeKit devices like lights. macOS appears to be supported in the official documentation, but not when I try to create an App ID on developer.apple.com. On https://developer.apple.com/apple-home/, macOS is clearly shown as supported, but when I try to create an App ID, the capability shows as compatible only with iOS, visionOS, and watchOS. Could this be clarified? Best regards, orangeidle25
Replies: 2 · Boosts: 0 · Views: 285 · Activity: 4w
Live Activity Stops Updating After 30 Seconds in Background During Audio Playback
Hi, I developed a music app that plays offline audio and displays lyrics using Live Activities. According to the ActivityKit documentation, Live Activities can be updated from the background. However, in my case, updates stop after ~30 seconds when the app goes to the background or the device is locked.

Important points:

- The app continues running in the background (audio playback works fine using AVAudioSession with .playback)
- Background code execution is working as expected
- Only the Live Activity stops updating
- I am not using push updates, since this is an offline app

Is there any limitation or requirement for updating Live Activities continuously in the background during audio playback?

Audio session configuration:

```swift
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(
        .playback,
        mode: .default,
        options: [.mixWithOthers] // ✅ DO NOT interrupt other audio
    )
    try session.setActive(true)
    print("✅ [AudioSession] Activated with mixWithOthers")
} catch {
    print("❌ [AudioSession] Error: \(error)")
}
```

Live Activity update method:

```swift
guard let activity = getLiveActivity(for: recordID) else {
    print("⚠️ No Live Activity found for recordID: \(recordID)")
    return
}
guard activity.activityState == .active else {
    print("⚠️ Activity is not active")
    return
}
Task {
    let content = ActivityContent(
        state: state,
        staleDate: Date().addingTimeInterval(60 * 60 * 12),
        relevanceScore: 1.0
    )
    await activity.update(content)
    print("✅ Live Activity updated with ActivityContent")
}
```
Replies: 0 · Boosts: 0 · Views: 271 · Activity: 3w
A new coder wanting to learn
Hi guys, I just joined the dev program, and I am a new coder. I started using Swift Playgrounds and I am loving it! Are there any other apps that can help me learn coding? Would love to hear some suggestions. Thanks.
Replies: 7 · Boosts: 0 · Views: 275 · Activity: 4w
Apple Watch companion app keeps uninstalling
Hello, I’m developing an Apple Watch companion app for my swimming application, and the app keeps uninstalling/disappearing from the Apple Watch. I have a specific scheme to install it on my watch; it appears there and I can debug it, but after a while it disappears. It’s my first app for this device, but this doesn’t seem normal to me. Any ideas?
Replies: 1 · Boosts: 0 · Views: 71 · Activity: 3w
Waiting for review
Hi, my app has been waiting for review since Saturday at 5:42 PM, and so has the TestFlight build. It has been rejected a couple of times, and the errors were fixed with each update. Is this normal?
Replies: 1 · Boosts: 0 · Views: 120 · Activity: 4w
Xcode 26 with simulator 16 ?
Mac mini, Xcode 26.4, but the simulator version is 16. How can I update the simulator to version 26?
Replies: 2 · Boosts: 0 · Views: 147 · Activity: 4w
First-time notarization submissions stuck "In Progress" — two submissions, 15+ hours
This is my first time submitting an app for notarization. Both submissions have been stuck "In Progress" with no logs available. Body: This is my first time submitting an app for notarization. Both submissions have been stuck "In Progress" with no logs available. Submission 1: ID: 43ea68c1-5291-42c6-b0e1-3cacab4ca01a Submitted: 2026-04-09T02:05:34Z Status: In Progress (15+ hours) Submission 2: ID: 12ea49a0-64cf-495e-af7e-9aad5aabe30f Submitted: 2026-04-09T17:06:51Z Status: In Progress (1+ hour) Details: Team ID: PWTWN9N25D App: Native macOS SwiftUI app (arm64), ~84 MB zipped Signed with Developer ID Application certificate, Hardened Runtime enabled All embedded helper binaries individually codesigned with Hardened Runtime codesign --verify --deep --strict passes Submitted via xcrun notarytool submit with --keychain-profile notarytool log returns "not yet available" for both Apple System Status shows all services available
Replies
3
Boosts
1
Views
823
Activity
3w
Blending walk and run animations in RealityKit
Hi everybody,

I have two separate animation files, run.usdz and walk.usdz, which load perfectly in Reality Composer Pro and in my RealityKit application. I want to gradually increase the speed of my player by moving the blend weight from 0.0 (walking) to 1.0 (full-speed running).

```swift
let rabbit = await RabbitBuilder.loadWalkingRabbit()
let runningRabbit = await RabbitBuilder.loadRunningRabbit()

rabbit.scale = SIMD3(0.05, 0.05, 0.05)
runningRabbit.scale = SIMD3(0.05, 0.05, 0.05)

let walkAnimation = rabbit.availableAnimations
let runAnimation = runningRabbit.availableAnimations

RabbitWalker.walkAnim = walkAnimation.first!
RabbitWalker.runAnim = runAnimation.first!

guard let walk = RabbitWalker.walkAnim, let run = RabbitWalker.runAnim else { return }

let blendTree = BlendTreeAnimation<JointTransforms>(
    BlendTreeBlendNode(sources: [
        BlendTreeSourceNode(source: walk.definition, name: "walk", weight: .value(1 - weight)),
        BlendTreeSourceNode(source: run.definition, name: "run", weight: .value(weight))
    ]),
    name: "rabbitLocomotion",
    repeatMode: .repeat,
    offset: TimeInterval(elapsed)
)

// Runtime error after executing this line:
// "Cannot add incompatible timeline type to blend tree."
guard let resource = try? AnimationResource.generate(with: blendTree) else { return }
entity.playAnimation(resource)

static func loadWalkingRabbit() async -> Entity? {
    do {
        let scene = try await Entity(named: "Scene", in: realityKitEnvironmentBundle)
        guard let rabbit = await scene.findEntity(named: "RabbitWalk") else { return nil }
        await rabbit.removeFromParent()
        return rabbit
    } catch {
        return nil
    }
}

static func loadRunningRabbit() async -> Entity? {
    do {
        let scene = try await Entity(named: "Scene", in: realityKitEnvironmentBundle)
        guard let rabbit = await scene.findEntity(named: "RabbitRun") else { return nil }
        await rabbit.removeFromParent()
        return rabbit
    } catch {
        return nil
    }
}
```

When I run this code I get this error: "Cannot add incompatible timeline type to blend tree."

By the way, I have looked at the developer sample code from here, but I couldn't find any relevant BlendTreeAnimation sample that blends two animations. I would be very happy if someone could direct me to a solution.

Regards.
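Not an answer to the blend-tree error itself, but a possible workaround to the cross-fade goal: RealityKit's AnimationPlaybackController exposes a blendFactor property that scales how much a playing animation contributes to the final pose. A sketch under the assumption that both clips can play on the same rabbit entity (the names `walk`, `run`, `rabbit`, and `weight` are taken from the post above; whether the two skeletons are compatible enough for this is untested):

```swift
import RealityKit

// Play both clips on the same entity and cross-fade them manually,
// instead of building a BlendTreeAnimation.
let walkController = rabbit.playAnimation(walk.repeat())
let runController = rabbit.playAnimation(run.repeat())

// weight: 0.0 = walking, 1.0 = full-speed running
func setLocomotionWeight(_ weight: Float) {
    walkController.blendFactor = 1 - weight
    runController.blendFactor = weight
}
```

Separately, the "incompatible timeline type" message may be related to the two clips coming from two different loaded entities; it might be worth checking whether both animations can be retargeted onto a single skeleton before blending — that is a guess, not a confirmed cause.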
Replies: 4 | Boosts: 0 | Views: 378 | Activity: 3w
Wrong value for storekit custom purchase link allowed regions entitlement
Greetings fellow devs,

After accepting the Alternative Terms Addendum for Apps in the EU and adding the StoreKit External Purchases or Offers capability to our app identifier via App Store Connect, the entitlement showing up in Xcode is com.apple.developer.storekit.custom-purchase-link.allowed-regions, and it has the value 'jp'. How can we change the value of that entitlement to 'gr'?

We tried changing it in Xcode, but we get the error: "Provisioning profile "iOS Team Provisioning Profile: [app identifier]" doesn't match the entitlements file's value for the com.apple.developer.storekit.custom-purchase-link.allowed-regions entitlement." In Certificates, Identifiers and Profiles in the developer account there is no way to configure that capability. We sent a request to support and they only gave us a link to the documentation and to this forum.

We have completed every business agreement requested, and we have chosen Greece as the organisation region and as the app's availability region wherever possible. We have found nothing where Japan was chosen that would explain the entitlement value we were given. So where can this allowed-regions entitlement be configured?

Xcode version is 16.4 and the iOS minimum deployment target is 18.
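For anyone comparing files: the mismatch error means the local .entitlements file and the profile disagree. The value is dictated by the provisioning profile App Store Connect generates, so editing the local file cannot override it — it can only be made to match. A hypothetical .entitlements fragment showing the shape of the entry (the region string here is what the poster wants, not what the profile currently grants):

```xml
<key>com.apple.developer.storekit.custom-purchase-link.allowed-regions</key>
<array>
    <string>gr</string>
</array>
```

Until the profile itself carries 'gr', this local value will keep producing the "doesn't match" build error quoted above.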
Replies: 1 | Boosts: 0 | Views: 156 | Activity: 3w
iOS 26: Interactive sheet dismissal causes layout hitch in underlying SwiftUI view
I’ve been investigating a noticeable animation hitch when interactively dismissing a sheet over a SwiftUI screen with moderate complexity. This was not the case on iOS 18, so I’m curious if others are seeing the same on iOS 26 or have found any mitigations.

When dismissing a sheet via the swipe gesture, there’s a visible hitch right after lift-off:

- The hitch comes from layout work in the underlying view (behind the sheet).
- The duration scales with the complexity of that view (e.g. number of TextFields/layout nodes).
- The animation for programmatic dismiss (e.g. tapping a “Done” button) is smooth, although it hangs for a similar amount of time before dismissing, so it appears that the underlying work still happens.
- SwiftUI is not reevaluating the body during this (validated with Self._printChanges()), so that is not the cause.

Using Instruments, the hitch shows up as a layout spike on the main thread:

```
54ms UIView layoutSublayersOfLayer
54ms └─ _UIHostingView.layoutSubviews
38ms    └─ SwiftUI.ViewGraph.updateOutputs
11ms       ├─ partial apply for implicit closure #1 in closure #1
           │  in closure #1 in Attribute.init<A>(_:)
 4ms       └─ -[UIView
```

For the same hierarchy with varying complexity:

- ~3 TextFields in a List: ~25ms (not noticeable)
- ~20+ TextFields: ~60ms (clearly visible hitch)

The same view hierarchy on iOS 18 did not exhibit a visible hitch. I’ve tested this on an iOS 26.4 device and simulator.
I’ve also included a minimal reproducible example that illustrates this:

```swift
struct ContentView: View {
    @State var showSheet = false

    var body: some View {
        NavigationStack {
            ScrollView {
                ForEach(0..<120) { _ in
                    RowView()
                }
            }
            .navigationTitle("Repro")
            .toolbar {
                ToolbarItem(placement: .topBarTrailing) {
                    Button("Present") { showSheet = true }
                }
            }
            .sheet(isPresented: $showSheet) {
                PresentedSheet()
            }
        }
    }
}

struct RowView: View {
    @State var first = ""
    @State var second = ""

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            Text("Row")
                .font(.headline)
            HStack(spacing: 12) {
                TextField("First", text: $first)
                    .textFieldStyle(.roundedBorder)
                TextField("Second", text: $second)
                    .textFieldStyle(.roundedBorder)
            }
            HStack(spacing: 12) {
                Text("Third")
                Text("Fourth")
                Image(systemName: "chevron.right")
            }
        }
    }
}

struct PresentedSheet: View {
    @Environment(\.dismiss) private var dismiss

    var body: some View {
        NavigationStack {
            List {}
                .navigationTitle("Swipe To Dismiss Me")
                .toolbar {
                    ToolbarItem(placement: .topBarTrailing) {
                        Button("Done") { dismiss() }
                    }
                }
        }
    }
}
```

Is anyone else experiencing this, and have any mitigations been found beyond reducing view complexity? I’ve filed a feedback report under FB22501630.
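One partial mitigation worth trying (my suggestion, not verified against iOS 26's dismissal path): a plain ScrollView { ForEach } builds all 120 rows eagerly, so every row participates in the layout pass the trace shows. LazyVStack only builds rows as they scroll into view, which should shrink the layout graph that has to update during dismissal:

```swift
// Sketch: same repro, but rows are created lazily, so offscreen rows
// never enter the layout pass that spikes during sheet dismissal.
ScrollView {
    LazyVStack {
        ForEach(0..<120) { _ in
            RowView()
        }
    }
}
```

This doesn't explain the iOS 18 → 26 regression, but it may keep the per-dismissal layout cost below the visible-hitch threshold for large lists.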
Replies: 1 | Boosts: 0 | Views: 241 | Activity: 4w
Technical scope of Default Dialer App in EU: Access to Cellular Audio Stream for AI Services
Hello,

I am researching the technical feasibility of developing a Default Dialer App for the EU market using the specific entitlements granted under the Digital Markets Act (DMA). Our primary goal is to implement a cellular/VoLTE-based calling system (not mVoIP), and we need to clarify whether it is possible to provide features such as STT (speech-to-text) and call summarization, which require in-call audio recording.

Regarding the Default Dialer App entitlement in the EU, I would like to clarify the following:

1. Access to raw audio stream: When an app is granted Default Dialer status in the EU, does it gain programmatic access to the downlink and uplink audio streams of a cellular/VoLTE call for recording purposes?
2. LiveCommunicationKit and recording APIs: Does LiveCommunicationKit (or any related framework in iOS 26) provide specific APIs or delegates that allow a third-party dialer to capture native telephony audio, or is recording still restricted by the system's sandbox?
3. Entitlement scope for partners: If an EU-based partner obtains the necessary entitlements, can those entitlements authorize our application to handle cellular call processing entirely, including access to the associated audio data?
4. Recommended implementation: Are there any Apple-sanctioned methods or specific frameworks for implementing AI-driven call recording features (STT, summarization) within the scope of the new EU-specific regulations, without violating the current iOS security architecture?

We need to confirm these technical boundaries to provide a clear feasibility report to our EU partners during upcoming meetings. Any guidance on whether a third-party app can technically and legally record cellular calls under these specific conditions would be greatly appreciated.

Thank you.
Replies: 2 | Boosts: 0 | Views: 103 | Activity: 3w
EASession(accessory:forProtocol:) always returns nil — MFI accessory iAP2
Platform: iOS 17+ | Hardware: custom MFi-certified accessory (USB-C, iAP2) | Language: Swift

Problem

We have a custom MFi-certified accessory communicating over USB-C using ExternalAccessory. The app calls EASession(accessory:forProtocol:) after receiving EAAccessoryDidConnect, but it always returns nil. We never get past session creation.

What we have verified

We captured a sysdiagnose on-device and analysed the accessoryd-packets log. The full iAP2 handshake completes successfully at the OS level:

- USB attach succeeds
- The MFi auth certificate is present and Apple-issued
- Auth challenge and response complete successfully
- IdentificationInformation is accepted by iOS — protocol string and Team ID are correct
- EAAccessoryDidConnect fires as expected
- iOS sends StartExternalAccessoryProtocolSession — the OS-level session is established

So the hardware, MFi auth, protocol string, and Team ID are all correct. Despite this, EASession(accessory:forProtocol:) returns nil in the app.

We also confirmed:

- The protocol string in UISupportedExternalAccessoryProtocols in Info.plist matches the accessory exactly
- The protocol string in code matches Info.plist
- App entitlements are correctly configured
- EAAccessoryManager.shared().registerForLocalNotifications() is called before connection

Current connection code

```swift
@objc private func accessoryDidConnect(_ notification: Notification) {
    guard let accessory = notification.userInfo?[EAAccessoryKey] as? EAAccessory else { return }
    DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
        self.tryConnectToAccessory()
    }
}

private func tryConnectToAccessory() {
    DispatchQueue.main.asyncAfter(deadline: .now() + 3.0) {
        for accessory in EAAccessoryManager.shared().connectedAccessories {
            let session = EASession(accessory: accessory, forProtocol: "")
            // session is always nil here
        }
    }
}
```

Questions

1. The packet log shows a ~4 second gap between EAAccessoryDidConnect firing and iOS internally completing session readiness (StartExternalAccessoryProtocolSession). Is there a reliable way to know when iOS is actually ready to grant an EASession, rather than using a fixed delay?
2. Is there a delegate callback or notification that fires when the accessory protocol session is ready to be opened, rather than relying on EAAccessoryDidConnect plus an arbitrary delay?
3. Are there any known conditions on iOS 17+ under which EASession returns nil even though the iAP2 handshake completed successfully at the OS level?
4. Is retrying EASession after a nil result a supported pattern, or does a nil result mean the session will never succeed for that connection?

Any guidance appreciated.
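One thing worth ruling out, given that the protocol string in the snippet is redacted to "": EASession returns nil whenever the protocol argument is not in the accessory's declared list. A sketch that derives the protocol from EAAccessory.protocolStrings instead of hard-coding it — "com.example.myproto" is a placeholder for illustration, not the poster's real protocol:

```swift
import ExternalAccessory

// Pick the first protocol the accessory declares that we also list
// in Info.plist, rather than passing a literal string.
let supported: Set<String> = ["com.example.myproto"]  // placeholder

for accessory in EAAccessoryManager.shared().connectedAccessories {
    guard let proto = accessory.protocolStrings.first(where: supported.contains) else { continue }
    if let session = EASession(accessory: accessory, forProtocol: proto) {
        // EASession exposes optional streams; open them before use.
        session.inputStream?.open()
        session.outputStream?.open()
    }
}
```

If protocolStrings comes back empty at the time this runs, that would also point to the readiness-timing question rather than a configuration mismatch.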
Replies: 8 | Boosts: 0 | Views: 532 | Activity: 4w
Apple Developer Account permanent Termination/Deletion request
I am requesting full deletion and termination of my Apple Developer account and all associated data, and revocation of the Apple Developer Agreement. I am revoking the Apple Developer Agreement. This should be irreversible. I have already tried to contact Apple Developer Support a few times, and the request just gets ignored and closed. The developer account is associated with the Apple ID I am using on these forums. Thanks.
Replies: 9 | Boosts: 0 | Views: 307 | Activity: 3w
Lookify: AI Virtual Try-On — Stuck in "Waiting for Review" | 2 Months
Hello Apple Developer Community and App Review Team,

I'm writing to seek guidance regarding my app Lookify: AI Virtual Try-On (App ID: 6757718224), which has been caught in an ongoing review cycle since February 15, 2026 — nearly two months ago.

Submission history:

- Feb 15 | iOS 1.1.0 | Removed
- Feb 19 | iOS 1.1.0 | Removed
- Feb 21 | iOS 1.1.0 | Removed
- Apr 3 (2:21 AM) | iOS 1.1.0 | Removed
- Apr 3 (1:17 PM) | iOS 1.1.0 | Removed
- Apr 6 (current) | iOS 1.1.0 | Waiting for Review

Each submission was either self-removed after extended waiting periods with no reviewer feedback, or removed to address potential issues — only to re-enter the queue with the same outcome. The current submission has now been in "Waiting for Review" status since April 6 with no activity, no messages, and no indication of progress.

What I've done to comply:

- Updated the Privacy Policy to be fully GDPR and KVKK compliant
- Provided clear demo account credentials and usage instructions for the AI try-on feature
- Ensured all metadata, screenshots, and descriptions accurately reflect the app's functionality
- Reviewed Apple's App Review Guidelines thoroughly before each resubmission

I understand that AI-powered apps — especially those involving visual try-on technology — may require closer scrutiny, and I fully respect that process. I'm not asking to bypass any review step. I simply ask for transparency: if there is an issue with the app, a rejection with specific feedback would allow me to address it immediately.

This app represents months of development work. As a small independent developer, prolonged uncertainty without communication makes it very difficult to plan or improve.

My request: could anyone from the App Review team or community provide insight into the following?

- Whether there is an active flag or concern on this submission
- What the expected timeline might be for accounts with this submission history
- Whether an expedited review would be appropriate given this timeline

I have also submitted a contact request through the official App Review contact form. I am fully committed to making any necessary changes — I just need to know what they are.

Thank you sincerely for your time and assistance.

Mustafa Bilgiç
Developer, PlayTools
Replies: 2 | Boosts: 1 | Views: 232 | Activity: 4w
AlarmKit alerting-phase playback is significantly quieter than equivalent in-app playback using AVAudioSession(.playback)
Hi all,

I’m trying to determine whether the loudness gap I’m seeing between AlarmKit alert playback and normal app-managed playback is expected behavior, a sound-asset issue, or something that should be reported as a bug.

Observed behavior

When an alarm fires through AlarmKit while the device is locked, the alarm sound is significantly quieter than playback of the same or very similar audio once the app is active and using its own audio session. The difference is large enough that it does not feel like a small mastering difference. It feels like the AlarmKit / system alerting path is using a meaningfully lower effective output level than normal app playback.

Test scenario

My repro is roughly:

1. Schedule an alarm with AlarmKit.
2. Lock the device.
3. Let the alarm fire and listen during the system alerting phase.
4. Enter the app / continue into the app-driven alarm experience.
5. Play the same or equivalent alarm asset via app-managed playback.

Result: the AlarmKit / lock-screen alerting phase sounds much quieter; in-app playback sounds noticeably louder and fuller on the same device.

Current implementation

The alarm flow is currently split into two paths:

1) System alarm path
- Alarm scheduling and alert surfacing via AlarmKit
- Device may be locked
- No attempt to manipulate system volume
- No private APIs

2) In-app playback path
- After app activation, playback uses AVAudioSession category .playback and AVAudioPlayer
- Audio is routed as normal app playback
- This path sounds substantially louder than the AlarmKit path

Important detail

I am not asking how to override system volume. I understand that AlarmKit appears to follow the system ringer / alert volume model and does not expose a public API for custom alarm loudness. My question is narrower: is it expected that the same asset, or an equivalent one, will sound materially quieter during the AlarmKit alerting phase than during ordinary app playback with AVAudioSession(category: .playback)?
Questions

1. Is the lower perceived loudness during AlarmKit alerting an expected property of the framework / system alarm path?
2. Does AlarmKit playback use a different output path, gain policy, processing chain, or speaker treatment than normal app playback with .playback?
3. Are there recommended authoring constraints for AlarmKit alarm sounds to maximize perceived loudness on iPhone speakers — transient-heavy mix, stronger mids, reduced low end, different LUFS / peak strategy, shorter attack, etc.?
4. Has anyone measured this directly with the same WAV / CAF file, on the same device, at the same system volume, comparing locked AlarmKit playback vs unlocked in-app playback?
5. If this is not expected, would Apple want this reported as a bug with a sample project, exact iOS version, device model, and a screen recording / audio recording?

What I’m trying to figure out

For alarm-app UX this matters a lot: AlarmKit is the most reliable lock-screen/system path, but if AlarmKit playback is substantially quieter than normal app playback, the alarm experience is inconsistent depending on device/app state. That makes it hard to know whether to treat this as expected system behavior, a framework limitation, an asset/mastering problem, or a bug.

If anyone has tested this in a controlled way or received guidance from Apple/DTS, I’d appreciate any technical detail. Thanks.
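On the measurement side, one way to rule out the asset/mastering explanation before filing anything (my suggestion, not Apple guidance): normalize the test asset to a known integrated loudness and true-peak ceiling, then compare the two playback paths with the identical file. A sketch using ffmpeg's loudnorm filter — the target values here are illustrative, not a recommendation for shipping alarms:

```shell
# Produce a loud, peak-limited reference asset so any remaining gap between
# AlarmKit alerting and in-app playback is attributable to the output path,
# not the mix. I = integrated LUFS, TP = true peak (dBTP), LRA = loudness range.
ffmpeg -i alarm_source.wav -af loudnorm=I=-14:TP=-1.0:LRA=7 alarm_normalized.wav
```

If the normalized file still shows the same gap between the locked AlarmKit phase and unlocked .playback, that points at the system path rather than the asset.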
Replies: 2 | Boosts: 0 | Views: 258 | Activity: 3w
HomeKit support on macOS
I am currently developing an app for macOS that needs to control HomeKit devices like lights. The official documentation suggests macOS is supported, but the option is missing when I try to create an App ID on developer.apple.com. At https://developer.apple.com/apple-home/, macOS is clearly shown as supported. But when I try to create an App ID, the HomeKit capability shows as compatible only with iOS, visionOS, and watchOS. Could this be clarified?

Best regards,
orangeidle25
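For anyone cross-checking their setup, the HomeKit capability corresponds to the com.apple.developer.homekit entitlement; a minimal .entitlements fragment looks like this. Whether Apple will actually provision this key for a native macOS App ID is exactly the open question in this post, so treat this as a reference for the key's shape, not a confirmation that it works:

```xml
<key>com.apple.developer.homekit</key>
<true/>
```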
Replies: 2 | Boosts: 0 | Views: 285 | Activity: 4w
Live Activity Stops Updating After 30 Seconds in Background During Audio Playback
Hi,

I developed a music app that plays offline audio and displays lyrics using Live Activities. According to the ActivityKit documentation, Live Activities can be updated from the background. However, in my case, updates stop after ~30 seconds when the app goes to the background or the device is locked.

Important points:

- The app continues running in the background (audio playback works fine using AVAudioSession with .playback)
- Background code execution is working as expected
- Only the Live Activity stops updating
- I am not using push updates, since this is an offline app

Is there any limitation or requirement for updating Live Activities continuously in the background during audio playback?

Audio session configuration:

```swift
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(
        .playback,
        mode: .default,
        options: [.mixWithOthers] // ✅ DO NOT interrupt other audio
    )
    try session.setActive(true)
    print("✅ [AudioSession] Activated with mixWithOthers")
} catch {
    print("❌ [AudioSession] Error: \(error)")
}
```

Live Activity update method:

```swift
guard let activity = getLiveActivity(for: recordID) else {
    print("⚠️ No Live Activity found for recordID: \(recordID)")
    return
}
guard activity.activityState == .active else {
    print("⚠️ Activity is not active")
    return
}
Task {
    let content = ActivityContent(
        state: state,
        staleDate: Date().addingTimeInterval(60 * 60 * 12),
        relevanceScore: 1.0
    )
    await activity.update(content)
    print("✅ Live Activity updated with ActivityContent")
}
```
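One configuration angle worth checking (an assumption about the cause, not a confirmed fix for this 30-second cutoff): the system budgets how often a Live Activity may be updated unless the app declares support for frequent updates in its Info.plist. Both keys below are documented Info.plist keys:

```xml
<key>NSSupportsLiveActivities</key>
<true/>
<key>NSSupportsLiveActivitiesFrequentUpdates</key>
<true/>
```

With the frequent-updates key set, users additionally get a per-app toggle in Settings, so updates can still be throttled if the user disables it there.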
Replies: 0 | Boosts: 0 | Views: 271 | Activity: 3w