Improving Performance | Documentation - Roblox Creator Hub (2024)

This page describes common performance problems and best practices for mitigating them.

Script Computation

Expensive operations in Lua code take longer to process and can thus impact frame rate. Unless it is being executed in parallel, Lua code runs synchronously and blocks the main thread until it encounters a function that yields the thread.

Common Problems

  • Intensive operations on table structures - Complex operations such as serialization, deserialization, and deep cloning incur a high performance cost, especially on large table structures. This is particularly true if these operations are recursive or involve iterating over very large data structures.

  • High frequency events - Tying expensive operations to frame-based events of RunService without limiting the frequency means these operations are repeated every frame, which often results in an unnecessary increase in computation time. These events include:

    • RunService.PreAnimation

    • RunService.PreRender

    • RunService.PreSimulation

    • RunService.PostSimulation

    • RunService.Heartbeat


How to Mitigate

  • Invoke code on RunService events sparingly, limiting usage to cases where high-frequency invocation is essential (for example, updating the camera). You can execute most other code in other events or less frequently in a loop.

  • Break up large or expensive tasks using task.wait() to spread the work across multiple frames.

  • Identify and optimize unnecessarily expensive operations and use multithreading for computationally expensive tasks that don't need to access the data model.

  • Certain server-side scripts can benefit from Native Code Generation, a simple flag that compiles a script to machine code rather than bytecode.
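The task-splitting advice above can be sketched as follows; the processItem callback and the batch size are placeholders for your own workload:

```lua
local ITEMS_PER_BATCH = 100 -- tune for your workload

-- Spread an expensive loop across multiple frames so it doesn't
-- block the main thread for an entire frame.
local function processInBatches(items, processItem)
	for index, item in ipairs(items) do
		processItem(item)
		-- Yield after each batch; work resumes on a later frame.
		if index % ITEMS_PER_BATCH == 0 then
			task.wait()
		end
	end
end
```

Note that the caller must run this from a thread that is allowed to yield, such as one created with task.spawn().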

MicroProfiler Scopes

Scope                     | Associated Computation
RunService.PreRender      | Code executing on the PreRender event
RunService.PreSimulation  | Code executing on the Stepped event
RunService.PostSimulation | Code executing on the Heartbeat event
RunService.Heartbeat      | Code executing on the Heartbeat event

For more information on debugging scripts using the MicroProfiler, see the debug library, which includes functions for tagging specific code and further increasing specificity, such as debug.profilebegin and debug.profileend. Many Roblox API methods called by scripts also have their own associated MicroProfiler tags that can provide useful signal.
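As a sketch of custom MicroProfiler tagging, where the "UpdateMinimap" label is a placeholder for your own scope name:

```lua
local RunService = game:GetService("RunService")

RunService.Heartbeat:Connect(function()
	-- Wrap a region of code so it appears as a named scope
	-- in the MicroProfiler timeline.
	debug.profilebegin("UpdateMinimap")
	-- ...expensive per-frame work here...
	debug.profileend()
end)
```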

Script Memory Usage

Memory leaks can occur when you write scripts that consume memory that the garbage collector can't release when it's no longer in use. Leaks are especially pervasive on the server, because a server can stay online continuously for many days, whereas a client session is much shorter.

The following memory values in the Developer Console can indicate a problem that needs further investigation:

  • LuaHeap - High or growing consumption suggests a memory leak.

  • InstanceCount - Consistently growing numbers of instances suggest references to some instances in your code are not being garbage collected.

  • PlaceScriptMemory - Provides a script-by-script breakdown of memory usage.

Common Problems

  • Leaving connections connected - The engine never garbage collects event connections to an instance, or any values referenced inside the connected callback. As long as a connection remains active, the connected instance, the connected function, and every value it references stay out of reach of the garbage collector, even after the event fires.

    Although events are disconnected when the instance they belong to is destroyed, a common mistake is to assume this applies to Player objects. After a user leaves an experience, the engine doesn't automatically destroy their representative Player object and character model, so connections to the Player object and instances under the character model, such as Player.CharacterAdded, still consume memory if you don't disconnect them in your scripts. This can result in very significant memory leaks over time on the server as hundreds of users join and leave the experience.

  • Tables - Inserting objects into tables but not removing them when they are no longer needed causes unnecessary memory consumption, especially for tables that track user data. For example, the following code sample creates a table that adds user information each time a user joins:


    local Players = game:GetService("Players")

    local playerInfo = {}

    Players.PlayerAdded:Connect(function(player)
        playerInfo[player] = {} -- some info
    end)


    If you don't remove these entries when they are no longer needed, the table continues to grow in size and consumes more memory as more users join the session. Any code that iterates over this table also becomes more computationally expensive as the table grows in size.


To prevent memory leaks, clean up all used values when they are no longer needed: disconnect connections you no longer require, and remove table entries for users when they leave.
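A minimal sketch of such cleanup, using a hypothetical playerInfo table keyed by Player:

```lua
local Players = game:GetService("Players")

local playerInfo = {}

Players.PlayerAdded:Connect(function(player)
	playerInfo[player] = {} -- some info
end)

Players.PlayerRemoving:Connect(function(player)
	-- Remove the entry so the table doesn't grow for the lifetime
	-- of the server as users join and leave.
	playerInfo[player] = nil
end)
```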

Physics Computation

Excessive physics simulation can be a key cause of increased computation time per frame on both the server and the client.

Common Problems

  • Excessive physics time step frequency - By default, stepping behavior is in adaptive mode, where physics steps at either 60 Hz, 120 Hz, or 240 Hz, depending on the complexity of the physics mechanism.

    A fixed mode with improved physics accuracy is also available, which forces all physics assemblies to step at 240 Hz (four times per frame). This results in significantly more computation each frame.

  • Excessive number or complexity of simulated objects - The more 3D assemblies that are simulated, the longer physics computations take each frame. Often, experiences simulate objects that don't need to be simulated, or use mechanisms with more constraints and joints than they need.

  • Overly precise collision detection - Mesh parts have a collision fidelity property for detecting collision, which offers a variety of modes with different levels of performance impact. The precise collision detection mode for mesh parts has the most expensive performance cost and takes the engine longer to compute.


How to Mitigate

  • Anchor parts that don't require simulation - Anchor all parts that don't need to be driven by physics, such as for static NPCs.

  • Use adaptive physics stepping - Adaptive stepping dynamically adjusts the rate of physics calculations for physics mechanisms, allowing physics updates to be made less frequently in some cases.

  • Reduce mechanism complexity

    • Where possible, minimize the number of physics constraints or joints in an assembly.

    • Reduce the amount of self-collision within a mechanism, such as by applying limits or no-collision constraints to ragdoll limbs to prevent them from colliding with each other.

  • Reduce the usage of precise collision fidelity for meshes

    • For small or non-interactable objects where users would rarely notice the difference, use box fidelity.

    • For small-medium size objects, use box or hull fidelities, depending on the shape.

    • For large and very complex objects, build out custom collisions using invisible parts when possible.

    • For objects that don't require collisions, disable collisions and use box or hull fidelity, since the collision geometry is still stored in memory.

    • You can render collision geometry for debug purposes in Studio using File > Studio Settings > Studio > Visualization > Show Decomposition Geometry.

      Alternatively, apply the CollisionFidelity=Precise filter to the Explorer, which shows a count of all mesh parts with the precise fidelity and allows you to easily select them.

    • For an in-depth walkthrough on how to choose a collision fidelity option that balances your precision and performance requirements, see Set Physics and Rendering Parameters.
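The anchoring advice above can be sketched as follows; the StaticNPCs folder is an assumption about how your scene is organized:

```lua
local staticNPCs = workspace:WaitForChild("StaticNPCs")

-- Anchor every part in the static NPC models so the physics
-- solver doesn't need to simulate them.
for _, descendant in ipairs(staticNPCs:GetDescendants()) do
	if descendant:IsA("BasePart") then
		descendant.Anchored = true
	end
end
```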

MicroProfiler Scopes

Scope          | Associated Computation
physicsStepped | Overall physics computation
worldStep      | Discrete physics steps taken each frame

Physics Memory Usage

Physics movement and collision detection consumes memory. Mesh parts have a collision fidelity property that determines the approach that's used to evaluate the collision bounds of the mesh.

Common Problem

The default and precise collision detection modes consume significantly more memory than the two other modes with lower-fidelity collision shapes.

If you see high levels of memory consumption under PhysicsParts, you might need to explore reducing the collision fidelity of parts in your experience.

How to Mitigate

To reduce memory used for collision fidelity:

  • Reduce the number of unique meshes.

  • For parts that do not need collisions, disable their collisions by setting CanCollide, CanTouch, and CanQuery to false.

  • Reduce fidelity of collisions, using the MeshPart.CollisionFidelity setting. Box has the lowest memory overhead, and Default and Precise are generally more expensive.

    • It's generally safe to set any small anchored part's collision fidelity to Box.

    • For very complex large meshes, you may want to build your own collision mesh out of smaller objects with box collision fidelity.

You can visualize collision meshes in your environment by enabling Show Decomposition Geometry in Studio settings.
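As a sketch of disabling collisions in bulk, assuming a hypothetical Decorations folder of purely visual parts:

```lua
local decorations = workspace:WaitForChild("Decorations")

-- Turn off collision, touch, and spatial query handling so the
-- engine doesn't need this geometry for physics queries.
for _, descendant in ipairs(decorations:GetDescendants()) do
	if descendant:IsA("BasePart") then
		descendant.CanCollide = false
		descendant.CanTouch = false
		descendant.CanQuery = false
	end
end
```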


Humanoids

Humanoid is a class that provides a wide range of functionality to player and non-player characters (NPCs). Although powerful, a Humanoid comes with a significant computation cost.

Common Problems

  • Leaving all HumanoidStateTypes enabled on NPCs - There is a performance cost to leaving certain HumanoidStateTypes enabled. Disable any that are not needed for your NPCs. For example, unless your NPC is going to climb ladders, it's safe to disable the Climbing state.

  • Instantiating, modifying, and respawning models with Humanoids frequently

    • This can be intensive for the engine to process, particularly if these models use Layered clothing. It can also be particularly problematic in experiences where avatars respawn often.

    • In the MicroProfiler, lengthy updateInvalidatedFastClusters tags (over 4 ms) are often a signal that avatar instantiation/modification is triggering excessive invalidations.

  • Using Humanoids in cases where they are not required - Static NPCs that do not move generally have no need for the Humanoid class.

  • Playing animations on a large number of NPCs from the server - NPC animations that run on the server need to be simulated on the server and replicated to the client. This can be unnecessary overhead.


How to Mitigate

  • Play NPC animations on the client - In experiences with a large number of NPCs, consider creating the Animator on the client and running the animations locally. This reduces the load on the server and the need for unnecessary replication. It also makes additional optimizations possible (such as only playing animations for NPCs near the character).

  • Use performance-friendly alternatives to Humanoids - NPC models don't necessarily need to contain a Humanoid object.

    • For static NPCs, use a simple AnimationController, because they don't need to move around but just need to play animations.

    • For moving NPCs, consider implementing your own movement controller and using an AnimationController for animations, depending on the complexity of your NPCs.

  • Disable unused humanoid states - Use Humanoid:SetStateEnabled() to enable only the necessary states for each humanoid.

  • Pool NPC models with frequent respawning - Instead of destroying an NPC completely, send it to a pool of inactive NPCs. When a new NPC needs to respawn, reactivate one from the pool. This process, called pooling, minimizes the number of times characters need to be instantiated.

  • Only spawn NPCs when users are nearby - Don't spawn NPCs when users aren't in range, and cull them when users leave their range.

  • Avoid making changes to the avatar hierarchy after it is instantiated - Certain modifications to an avatar hierarchy have significant performance implications. Some optimizations are available:

    • For custom procedural animations, don't update the JointInstance.C0 and JointInstance.C1 properties. Instead, update the Motor6D.Transform property.

    • If you need to attach any BasePart objects to the avatar, do so outside of the hierarchy of the avatar Model.
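The state-disabling advice above can be sketched as follows; the workspace.NPCs folder is an assumption, and which states are safe to disable depends on what your NPCs actually do:

```lua
-- Disable humanoid states a simple ground-based NPC never uses.
local function trimHumanoidStates(humanoid)
	humanoid:SetStateEnabled(Enum.HumanoidStateType.Climbing, false)
	humanoid:SetStateEnabled(Enum.HumanoidStateType.Swimming, false)
	humanoid:SetStateEnabled(Enum.HumanoidStateType.FallingDown, false)
end

for _, npc in ipairs(workspace.NPCs:GetChildren()) do
	local humanoid = npc:FindFirstChildOfClass("Humanoid")
	if humanoid then
		trimHumanoidStates(humanoid)
	end
end
```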

MicroProfiler Scopes

Scope                         | Associated Computation
stepHumanoid                  | Humanoid control and physics
stepAnimation                 | Humanoid and animator animation
updateInvalidatedFastClusters | Associated with instantiating or modifying an avatar


Rendering

A significant portion of the time the client spends each frame is on rendering the scene for the current frame. The server doesn't do any rendering, so this section applies exclusively to the client.

Draw Calls

A draw call is a set of instructions from the engine to the GPU to rendersomething. Draw calls have significant overhead. Generally, the fewer drawcalls per frame, the less computational time is spent rendering a frame.

You can see how many draw calls are currently occurring with the Render Stats > Timing item in Studio. You can view Render Stats in the client by pressing Shift+F2.

The more objects that need to be drawn in your scene in a given frame, the more draw calls are made to the GPU. However, the Roblox engine uses a process called instancing to collapse identical meshes with the same texture characteristics into a single draw call. Specifically, multiple meshes with the same MeshId are handled in a single draw call when:

  • SurfaceAppearances are identical.

  • TextureIDs are identical when SurfaceAppearance doesn't exist.

  • Materials are identical when both SurfaceAppearance and MeshPart.TextureID don't exist.

Other Common Problems

  • Excessive object density - If a large number of objects are concentrated with a high density, then rendering this area of the scene requires more draw calls. If your frame rate drops when looking at a certain part of the map, this is a good signal that object density in that area is too high.

  • Missed instancing opportunities - Often, a scene will include the same mesh duplicated a number of times, but each copy of the mesh has different mesh or texture asset IDs. This prevents instancing and can lead to unnecessary draw calls.

    A common cause of this problem is when an entire scene is imported at once, rather than individual assets being imported into Roblox and then duplicated post-import to assemble the scene.

  • Excessive object complexity - Although not as important as the number of draw calls, the number of triangles in a scene does influence how long a frame takes to render. Scenes with a very large number of very complex meshes are a common problem, as are scenes with the MeshPart.RenderFidelity property set to Precise on too many meshes.

  • Excessive shadow casting - Handling shadows is an expensive process, and maps that contain a high number and density of light objects that cast shadows (or a high number and density of small parts influenced by shadows) are likely to have performance issues.


How to Mitigate

  • Instance identical meshes and reduce the number of unique meshes - If you ensure all identical meshes have the same underlying asset ID, the engine can recognize and render them in a single draw call. Upload each mesh in a map only once and duplicate it in Studio for reuse rather than importing large maps as a whole, which can give identical meshes separate content IDs and cause the engine to treat them as unique assets. Packages are a helpful mechanism for object reuse.

  • Culling - Culling describes the process of eliminating draw calls for objects that don't factor into the final rendered frame. By default, the engine skips draw calls for objects outside the camera's field of view (frustum culling), but doesn't skip draw calls for objects occluded from view by other objects (occlusion culling). If your scene has a large number of draw calls, consider implementing your own additional culling dynamically at runtime, such as with the following common strategies:

    • Hide MeshPart and BasePart instances that are far away from the camera.

    • For indoor environments, implement a room or portal system that hides objects in rooms not currently occupied by any users.

  • Reducing render fidelity - Set render fidelity to Automatic or Performance. This allows meshes to fall back to less complex alternatives, which can reduce the number of polygons that need to be drawn.

  • Disabling shadow casting on appropriate parts and light objects - The complexity of the shadows in a scene can be reduced by selectively disabling shadow casting properties on light objects and parts, either at edit time or dynamically at runtime. Some examples:

    • Use the BasePart.CastShadow property to disable shadow casting on small parts where shadows are unlikely to be visible. This can be particularly effective when applied only to parts that are far away from the user's camera.

      This might result in visual artifacts on shadows.

    • Disable shadows on moving objects when possible.

    • Disable Light.Shadows on light instances where the object does not need to cast shadows.

    • Limit the range and angle of light instances.

    • Use fewer light instances.
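A minimal client-side sketch of the distance-based hiding strategy above; the Decorations folder, the distance threshold, and the refresh rate are assumptions:

```lua
local CULL_DISTANCE = 512 -- studs; tune for your map

local decorations = workspace:WaitForChild("Decorations")

-- Periodically hide decorative parts that are far from the camera.
-- A fully transparent part is skipped when the frame is rendered.
task.spawn(function()
	while true do
		local camera = workspace.CurrentCamera
		if camera then
			local cameraPosition = camera.CFrame.Position
			for _, part in ipairs(decorations:GetChildren()) do
				if part:IsA("BasePart") then
					local far = (part.Position - cameraPosition).Magnitude > CULL_DISTANCE
					part.LocalTransparencyModifier = far and 1 or 0
				end
			end
		end
		task.wait(0.5) -- refresh a couple of times per second, not every frame
	end
end)
```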

MicroProfiler Scopes

Scope                                | Associated Computation
Prepare and Perform                  | Overall rendering
Perform/Scene/computeLightingPerform | Light grid and shadow updates
Update LightGrid                     | Voxel light grid updates
Shadows                              | Shadow mapping
Perform/Scene/UpdateView             | Preparation for rendering and particle updates
Perform/Scene/RenderView             | Rendering and post processing

Networking and Replication

Networking and replication describes the process by which data is sent between the server and connected clients. Information is sent between the client and server every frame, but larger amounts of information require more compute time.

Common Problems

  • Excessive remote traffic - Sending a large amount of data through RemoteEvent or RemoteFunction objects, or invoking them very frequently, can lead to a large amount of CPU time being spent processing incoming packets each frame. Common mistakes include:

    • Replicating data every frame that does not need to be replicated.

    • Replicating data on user input without any mechanism to throttle it.

    • Dispatching more data than is required. For example, sending the player's entire inventory when they purchase an item rather than just details of the item purchased.

  • Creation or removal of complex instance trees - When a change is made to the data model on the server, it is replicated to connected clients. This means creating and destroying large instance hierarchies like maps at runtime can be very network intensive.

    A common culprit here is the complex animation data saved by Animation Editor plugins in rigs. If these aren't removed before the game is published and the animated model is cloned regularly, a large amount of data is replicated unnecessarily.

  • Server-side TweenService - If TweenService is used to tween an object server-side, the tweened property is replicated to each client every frame. Not only does this result in the tween appearing jittery as clients' latency fluctuates, but it causes a lot of unnecessary network traffic.


How to Mitigate

You can employ the following tactics to reduce unnecessary replication:

  • Avoid sending large amounts of data at once through remote events. Instead, only send necessary data at a lower frequency. For example, replicate a character's state when it changes rather than every frame.

  • Chunk up complex instance trees like maps and load them in pieces to distribute the work of replicating them across multiple frames.

  • Clean up animation metadata, especially the animation directory of rigs,after importing.

  • Limit unnecessary instance replication, especially in cases where the server doesn't need to have knowledge of the instances being created. This includes:

    • Visual effects such as an explosion or a magic spell blast. The server only needs to know the location to determine the outcome, while the clients can create visuals locally.

    • First-person item view models.

    • Tween objects on the client rather than the server.
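The first tactic above can be sketched as follows on the server; StateChanged is a hypothetical RemoteEvent in ReplicatedStorage:

```lua
local ReplicatedStorage = game:GetService("ReplicatedStorage")

local stateChanged = ReplicatedStorage:WaitForChild("StateChanged")
local lastState = {}

-- Replicate a character state only when it actually changes,
-- instead of firing the remote every frame.
local function setPlayerState(player, newState)
	if lastState[player] ~= newState then
		lastState[player] = newState
		stateChanged:FireAllClients(player, newState)
	end
end
```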

MicroProfiler Scopes

Scope                              | Associated Computation
ProcessPackets                     | Processing for incoming network packets, such as event invocations and property changes
Allocate Bandwidth and Run Senders | Outgoing events relevant on servers

Asset Memory Usage

The highest impact mechanism available to creators to improve client memory usage is to enable Instance Streaming.

Instance Streaming

Instance streaming selectively streams out parts of the data model that are not required, which can considerably reduce load times and help the client avoid crashes when it comes under memory pressure.

If you are encountering memory issues and have instance streaming disabled, consider updating your experience to support it, particularly if your 3D world is large. Instance streaming is based on distance in 3D space, so larger worlds naturally benefit more from it.

If instance streaming is enabled, you can increase the aggressiveness of it. For example, consider:

  • Reducing use of the persistent StreamingIntegrity setting.

  • Reducing the streaming radius.

For more information on streaming options and their benefits, see Streaming Properties.

Other Common Problems

  • Asset duplication - A common mistake is to upload the same asset multiple times, resulting in different asset IDs. This can lead to the same content being loaded into memory multiple times.

  • Excessive asset volume - Even when assets are not identical, there are cases when opportunities to reuse the same asset and save memory are missed.

  • High resolution textures - Graphics memory consumption for a texture is determined not by the size of the texture on disk, but by the number of pixels in the texture.

    • For example, a 1024x1024 pixel texture consumes four times the graphics memory of a 512x512 texture.

    • Images uploaded to Roblox are transcoded to a fixed format, so there is no memory benefit to uploading images in a color model associated with fewer bytes per pixel. Though the engine automatically downscales texture resolution on some devices, the extent of the downscale depends on the device characteristics, and excessive texture resolution can still cause problems.

    • You can identify the graphics memory consumption for a given texture by expanding the GraphicsTexture category in the Developer Console.


How to Mitigate

  • Only upload assets once - Reuse the same asset ID across objects and ensure the same assets, especially meshes and images, aren't uploaded separately multiple times.

  • Find and fix duplicate assets - Look for identical mesh parts and textures that are uploaded multiple times with different IDs.

    • Though there is no API to detect similarity of assets automatically, you can collect all the image asset IDs in your place (either manually or with a script), download them, and compare them using external comparison tools.

    • For mesh parts, the best strategy is to take unique mesh IDs and organize them by size to manually identify duplicates.

  • Import assets in a map separately - Instead of importing an entire map at once, import assets individually and reconstruct the map from them. The 3D importer doesn't do any de-duplication of meshes, so if you import a large map with many separate floor tiles, each tile is imported as a separate asset (even if they are duplicates). This can lead to performance and memory issues down the line, as each mesh is treated individually and takes up memory and draw calls.

  • Limit image resolution to no more than the necessary amount. Unless an image occupies a large amount of physical space on the screen, it usually needs at most 512x512 pixels. Most minor images should be smaller than 256x256 pixels.

  • Use Trim Sheets to ensure maximum texture reuse in 3D maps. For steps and examples on how to create trim sheets, see Creating Trim Sheets.

Load Times

Many experiences implement custom loading screens and use the ContentProvider:PreloadAsync() method to request assets so that images, sounds, and meshes are downloaded in the background.

The advantage of this approach is that it lets you ensure important parts of your experience are fully loaded without pop-in. However, a common mistake is overutilizing this method to preload more assets than are actually required.

An example of a bad practice is loading the entire Workspace. While this might prevent texture pop-in, it significantly increases load time.

Instead, only use ContentProvider:PreloadAsync() in necessary situations, which include:

  • Images in the loading screen.

  • Important images in your experience menu, such as button backgrounds and icons.

  • Important assets in the starting or spawning area.

If you must load a large number of assets, we recommend you provide a Skip Loading button.
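As a sketch of targeted preloading in a client script, where the asset references are placeholders for your own loading screen and spawn area:

```lua
local ContentProvider = game:GetService("ContentProvider")

-- Only preload what the user sees first: the loading screen's
-- images and the assets around the spawn area.
local criticalAssets = {
	script.Parent.LoadingScreen,         -- ScreenGui with loading images
	workspace:WaitForChild("SpawnArea"), -- model around the spawn point
}

-- PreloadAsync yields until the content associated with the
-- listed instances has been downloaded.
ContentProvider:PreloadAsync(criticalAssets)
```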
