
Stable Diffusion Checkpoint vs. Model: Text-to-Image with Stable Diffusion

This article looks at Stable Diffusion models and checkpoints, including the Stable Diffusion v2-1-base model. Stable Diffusion is developed by Stability AI.

Generally speaking, diffusion models are machine learning systems trained to denoise random Gaussian noise step by step until they reach a sample of interest, such as an image. In Stable Diffusion the denoising does not happen at full image resolution: a diffusion model repeatedly "denoises" a 64x64 latent image patch.

Many checkpoints are trained for a particular style. Anime models are specially trained to generate anime images, and their prompt adherence is quite good; you use an anime model when you want anime output. NAI, a model created by the company NovelAI by modifying the Stable Diffusion architecture and training method, was a massive improvement over other anime models at the time of its release in October 2022. Realism-focused checkpoints, by contrast, tend to handle various ethnicities and ages with ease, and are capable of creating both NSFW and SFW images as well as great scenery, in both landscape and portrait.

Below is an example prompt using attention-weighting syntax: hyper realistic gopro action photo of a beautiful young Dutch woman (looking at camera:1.3), windy, wearing old worn clothing.

Make sure you place downloaded model/checkpoint files in the folder stable-diffusion-webui\models\Stable-diffusion. The files do not have to sit on a local drive: one user reports keeping every checkpoint on a NAS and simply pointing the Web UI at it.
Using a model with the Stable Diffusion Colab notebook is easy, and the same checkpoints work in local GUIs.

Stable Diffusion consists of three parts, the first of which is a text encoder that turns your prompt into a latent vector. Stable Diffusion checkpoints are pre-trained models that learned from image sources and can therefore create new images based on that learned knowledge. A checkpoint contains all of the model's data, including the EMA (exponential moving average) weights, which are enough for image generation, plus the full weights needed to resume training on that model. Note: Stable Diffusion v1 is a general text-to-image diffusion model. LyCORIS, by contrast, is not a checkpoint at all but a collection of LoRA-like methods.

As a concrete example of how checkpoints are produced, the Stable-Diffusion-v1-3 checkpoint was initialized with the weights of the Stable-Diffusion-v1-2 checkpoint and subsequently fine-tuned for 195,000 steps at resolution 512x512 on "laion-improved-aesthetics", with 10% dropping of the text-conditioning to improve classifier-free guidance sampling. A later checkpoint was fine-tuned for 225,000 steps at 512x512 on "laion-aesthetics v2 5+" with the same 10% text-conditioning dropout.

Among the most recommended photorealistic checkpoints is CyberRealistic, a very flexible checkpoint that can generate a wide range of styles and realism levels. Checkpoints fine-tuned on SDXL generate high-quality photorealistic images with vibrant, accurate colors, superior contrast, and more detailed shadows than the base SDXL model, at a native resolution of 1024x1024. Basic generation settings include width and height, the size of the output image.

Since its release in 2022, Stable Diffusion has proved to be a reliable and effective deep-learning text-to-image generation model. For more information about how it functions, have a look at Hugging Face's "Stable Diffusion with Diffusers" blog post.
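To make the EMA idea above concrete, here is a minimal sketch of how an exponential-moving-average copy of the weights is maintained during training. The function name, decay value, and single-weight "state dict" are illustrative assumptions, not taken from any particular trainer:

```python
# Minimal sketch of EMA ("exponential moving average") weight tracking.
# A real checkpoint holds tensors; a dict of floats stands in for them here.

def ema_update(ema_weights, current_weights, decay=0.999):
    """Blend the current training weights into the slowly-moving EMA copy."""
    return {
        name: decay * ema_weights[name] + (1.0 - decay) * current_weights[name]
        for name in ema_weights
    }

ema = {"w": 1.0}
for step_weight in [0.0, 0.0, 0.0]:   # pretend the raw weight jumped to 0
    ema = ema_update(ema, {"w": step_weight}, decay=0.9)

print(round(ema["w"], 3))  # 0.729 -- the EMA copy drifts slowly, smoothing noise
```

This is why EMA weights alone are enough for inference (they are a smoothed snapshot of the model), while resuming training also needs the raw weights and optimizer state that "pruned" files throw away.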
Over time, the Stable Diffusion artificial intelligence (AI) art generator has significantly advanced, introducing new and progressive checkpoints such as DreamShaper XL. If there is one component in the pipeline that has the most impact on output, it is the model. In Stable Diffusion, the model checkpoint and LoRA both play important roles: the checkpoint defines the overall style of the AI images through its large training dataset, while a LoRA lets users fine-tune a model easily. The VAE (Variational AutoEncoder) is another component, converting the image between the pixel and latent spaces.

Stable Diffusion, at least through hosted front ends like Clipdrop and DreamStudio, is simple to use and can make great AI-generated images from relatively complex prompts. Model-sharing sites let you browse checkpoints, hypernetworks, textual inversions, embeddings, Aesthetic Gradients, and LoRAs; one quirk is that some models do not have hashes. To use a checkpoint with ControlNet, select it in the Stable Diffusion checkpoint dropdown menu.

Saving checkpoints during training also reduces the risk of overfitting, by allowing early stopping based on validation performance, and makes it possible to compare model versions and fine-tune hyperparameters.

If you ever wished a model existed that fit your style, or wished you could change something about a model you use, Checkpoint Merger makes that possible: it combines two or three pre-trained Stable Diffusion models into a new model that embodies the features of the merged models, aiming to improve the quality and versatility of the generated images. As a reference point on versions, the stable-diffusion-2-1-base model fine-tunes stable-diffusion-2-base (512-base-ema.ckpt).
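The simplest merge mode is a weighted sum of the two parent models' weights. The sketch below uses plain floats in place of tensors and hypothetical key names; real mergers do the same arithmetic key by key over matching state dicts:

```python
# Hedged sketch of a "weighted sum" checkpoint merge: out = A*(1-alpha) + B*alpha.

def weighted_sum_merge(state_a, state_b, alpha):
    """Interpolate two state dicts; alpha=0 returns model A, alpha=1 returns B."""
    return {
        key: (1.0 - alpha) * state_a[key] + alpha * state_b[key]
        for key in state_a
    }

model_a = {"unet.w": 0.2, "unet.b": 1.0}
model_b = {"unet.w": 0.6, "unet.b": 0.0}
merged = weighted_sum_merge(model_a, model_b, alpha=0.5)
print(merged)  # {'unet.w': 0.4, 'unet.b': 0.5}
```

A real implementation must also handle keys present in only one model and tensors of mismatched architecture, which is why merging across different base architectures generally fails.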
The DiffusionPipeline class is a simple and generic way to load a diffusion model from the Hub. Stable Diffusion itself is a system made up of several components and models rather than one monolithic network; its model type is "diffusion-based text-to-image generative model", and it is a latent text-to-image diffusion model. Thanks to a generous compute donation from Stability AI and support from LAION, the original latent diffusion model was trained on 512x512 images from a subset of the LAION-5B database.

When using Stable Diffusion, loading a checkpoint allows you to generate images based on the learned knowledge the model had accumulated up to that point in its training. Training an AI model involves feeding it data and allowing it to learn patterns from that data; in the Web UI the model file is called the "checkpoint", named after how the model is saved during training. Some checkpoints are specialized: one model card, for example, focuses on role-playing-game portraits similar to Baldur's Gate, Dungeons & Dragons, Icewind Dale, and more modern RPG character styles. LyCORIS and LoRA models, by contrast, aim to make minor adjustments to a Stable Diffusion model using a small file.

Images can be generated from just the EMA weights, so most distributed model files remove the other data to shrink the file size (these are "pruned" checkpoints). One caution about the classic .ckpt format: it is a pickled file, and pickle is not secure — pickled files may contain malicious code that is executed on load.
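The pickle risk is easy to demonstrate. The benign payload below smuggles an `eval()` call into a pickle; simply *loading* the data executes it. A malicious checkpoint could call something far more harmful than arithmetic:

```python
# Why a pickled .ckpt can be dangerous: unpickling may CALL arbitrary code.
import pickle

class Payload:
    def __reduce__(self):
        # Whatever this returns gets called during pickle.loads().
        return (eval, ("40 + 2",))

data = pickle.dumps(Payload())
result = pickle.loads(data)   # "loading" the file runs eval("40 + 2")
print(result)  # 42
```

This is exactly the problem the safetensors format was designed to avoid: it stores raw tensor data plus a JSON header, with no executable component.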
Checkpoint 1: Realistic Vision. Training data is used to change the weights in the model so that it becomes capable of rendering images similar to the training data, but care must be taken that fine-tuning does not "override" existing knowledge. Models, also called checkpoints, are files created by training Stable Diffusion on specific images. Besides checkpoints there are other types of Stable Diffusion models, such as LoRAs, LoCons, LoHas, and LECOs, but this article only looks at checkpoints. The .ckpt format is how models are stored and saved.

A general-purpose checkpoint might find photorealism harder than a realism-focused model does, just as it might find anime harder than an anime-focused model does, but it can do both pretty well if you are skilled enough. Specialized examples abound: the Stable-Diffusion-Inpainting model was initialized with the weights of Stable-Diffusion-v1-2; Pony Diffusion V6 is a versatile SDXL fine-tune capable of producing stunning SFW and NSFW visuals of various anthro, feral, or humanoid species from simple natural-language prompts; and checkpoints like Copax Timeless SDXL, Zavychroma SDXL, Dreamshaper SDXL, Realvis SDXL, and Samaritan 3D XL are fine-tuned on base SDXL. Merging in a "dreamlike" model adds a dreamy, ethereal effect, so the resulting images exhibit a unique blend of photorealism and dreamlike elements. A useful comparison is SD 1.5 vs. Openjourney with the same parameters, just adding "mdjrny-v4 style" at the beginning of the prompt. With the Diffusers library, such a model can be used just like any other Stable Diffusion model.

By comparison, DALL·E 3 feels better "aligned", so you may see less stereotypical results. In the Web UI, the first generation setting is Stable Diffusion Checkpoint: select the model you want to use. For inpainting, use the paintbrush tool to create a mask on the face.
SDXL is a latent diffusion model that uses two fixed, pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L). Earlier in the lineage, the Stable-Diffusion-v1-4 checkpoint was initialized with the weights of the Stable-Diffusion-v1-2 checkpoint and subsequently fine-tuned for 225k steps at resolution 512x512 on "laion-aesthetics v2 5+". The model type throughout is a diffusion-based text-to-image generative model, released under the CreativeML Open RAIL++-M License, usable for generating and modifying images based on text prompts. The most common architecture nowadays is still version 1.5; Stable Diffusion 3 combines a diffusion transformer architecture with flow matching; and the array of fine-tuned Stable Diffusion models is abundant and ever-growing. In practice, hardly anyone sticks to only the official models.

Each checkpoint can be used both with Hugging Face's Diffusers library and with the original Stable Diffusion GitHub repository; to use v2.1 with the stablediffusion repository, download v2-1_512-ema-pruned.ckpt. On model paths, the Web UI appears to prefer local files: if a model is not in the local directory, it checks the directory given by the command-line argument. For LoRA-style additions, one user reports good results at a weight of 0.65, later offset to 1 (very easy to do with ComfyUI).

During training, checkpoints are crucial for preventing data loss by saving model parameters as training progresses. A common point of confusion: some checkpoints carry "(VAE)" in their name because a VAE is baked in — they are still checkpoints, not standalone VAE files, which is why they work when loaded as checkpoints. Under the hood, the diffusion model uses latent vectors from the image and text spaces along with a timestep embedding to predict the noise that was added to the image latent. The prompt setting simply describes what you want to see in the images.
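The noise-prediction objective in the last paragraph can be sketched with scalars. The cumulative-alpha value and the "predictor" below are made-up stand-ins for the real noise schedule and U-Net; only the structure (noising the latent, then scoring the noise estimate) matches the description above:

```python
# Toy, scalar illustration of the noise-prediction training target.
import math, random

random.seed(0)

def add_noise(x0, eps, abar):
    """Forward process: noisy latent x_t from clean latent x0 and noise eps."""
    return math.sqrt(abar) * x0 + math.sqrt(1.0 - abar) * eps

x0 = 0.8                       # "clean" latent value
eps = random.gauss(0.0, 1.0)   # the Gaussian noise actually added
x_t = add_noise(x0, eps, abar=0.5)

eps_pred = 0.0                 # an untrained predictor just guesses zero
loss = (eps_pred - eps) ** 2   # squared-error loss against the true noise
perfect_loss = (eps - eps) ** 2
assert perfect_loss == 0.0 and loss >= 0.0
```

A perfect predictor would recover `eps` exactly and score zero loss; training pushes the network toward that point for every timestep and every training image.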
The widely celebrated SD v1.5 models each have their unique allure and general-purpose capabilities, while SDXL is a veritable upgrade boasting higher resolution and quality. Stable Diffusion is a deep-learning text-to-image model released in 2022 based on diffusion techniques; later releases added Stable Diffusion 2.1-v (Hugging Face) at 768x768 resolution and Stable Diffusion 2.1-base at 512x512, both with the same number of parameters and architecture as 2.0. There is also SDXL Turbo, trained to generate images in 1 to 4 steps using Adversarial Diffusion Distillation (ADD). This article also touches on the Stable Diffusion v2-1 model, codebase available here.

A checkpoint model is a pre-trained Stable Diffusion weight, also known as a checkpoint file (.ckpt). A checkpoint trained via Dreambooth or similar is another roughly 4 GB file that you load instead of the stable-diffusion-1.4/1.5 file. The LyCORIS family of LoRA-like methods includes LoCon, LoHa, LoKR, and DyLoRA. On anime training scale: while the then-popular Waifu Diffusion was trained on SD plus roughly 300k anime images, NAI was trained on millions.

Civitai hosts thousands of models, and downloading hundreds of gigabytes from it is the norm, though trying them one by one takes a lot of time; among the most strongly recommended checkpoints for realistic images is DreamShaper (if you have followed my installation and getting-started guides, you already have it installed). A practical question that comes up: if a model has no hash, do I just pass the checkpoint name without the hash? Pro tip if you store many models and only use a few at a time: you can pack the models into a ZPAQ archive.

Typical high-resolution settings look like: Hires. fix: R-ESRGAN 4x+ | Steps: 10 | Denoising: 0.45. If a generated face comes out wrong, you can use inpainting to fix it: click the Send to Inpaint icon below the image to send it to img2img > inpainting. To incorporate the Dreamlike diffusion model, it is combined with the primary model during a merge.
LoRAs can be applied on top of a base Stable Diffusion checkpoint to introduce new capabilities like improved quality, specific art styles, characters, objects, or environments. A checkpoint is a snapshot taken during training that captures the state of a model at a specific stage, and a .ckpt file contains the entire model, typically several gigabytes in size. A LoRA functions like Dreambooth fine-tuning, but instead of changing the entire model it creates a small file external to the model that you can use together with checkpoints; LoRA is the original method in this family. When matching models programmatically, select the model title based on the matching model name.

Stable Diffusion XL (SDXL) iterates on the previous Stable Diffusion models in three key ways, among them that the UNet is 3x larger and that SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters. The Stable Diffusion 3 suite of models currently ranges from 800M to 8B parameters. Checkpoints themselves are versioned like software: Version 8 of one popular model focuses on improving what V7 started, while V7 improved LoRA support, NSFW, and realism — check the example images on the model page. Juggernaut XL, one of the best SDXL models, is the result of various iterations of merging combined with Dreambooth training, and generates a large variety of stunning male characters.

Work on the SD-Small and SD-Tiny models was inspired by the research presented in the paper "On Architectural Compression of Text-to-Image Diffusion Models". There is a reference script for sampling, and also a Diffusers integration, which is expected to see more active community development. To get started in the Web UI, select v1-5-pruned-emaonly.ckpt; first-time users can use the v1.5 base model. You can run Stable Diffusion on Windows, Mac, and Google Colab. DALL·E 3 can sometimes produce better results from shorter prompts than Stable Diffusion does — see the complete guide to prompt building for a tutorial.
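The reason LoRA files stay small is the low-rank trick: instead of shipping a replacement weight matrix W', a LoRA ships two thin matrices A and B and the host applies W' = W + scale * (B @ A). The sketch below uses tiny hand-written matrices and an illustrative scale; real LoRAs do this per attention layer with tensors:

```python
# Sketch of the LoRA update W' = W + scale * (B @ A), with pure-Python matmul.

def matmul(B, A):
    rows, inner, cols = len(B), len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def apply_lora(W, B, A, scale=1.0):
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
B = [[1.0], [0.0]]             # 2x1 -> rank r = 1
A = [[0.5, 0.5]]               # 1x2
patched_W = apply_lora(W, B, A)
print(patched_W)  # [[1.5, 0.5], [0.0, 1.0]]
```

For a 2x2 matrix this saves nothing, but for a 4096x4096 attention weight a rank-8 pair of factors is hundreds of times smaller than the matrix it perturbs, which is why LoRA files are megabytes while checkpoints are gigabytes.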
DALL·E 3 is better able to parse longer, more nuanced instructions and get more details right; with Stable Diffusion you instead pick a checkpoint suited to the task — in other words, checkpoints are a type of AI model. DreamShaper is the checkpoint I recommend to all Stable Diffusion beginners. Most of the sample images in model galleries follow a standard prompt format, so it pays to study them.

Merging checkpoints by averaging or mixing their weights can yield better results than either parent. Users have long asked for conveniences here; one early feature request read: "it would be nice if we could have a dropdown menu to select different models and custom ones too (sd1.4, waifudiffusion, ghiblimodel, dreambooth-training-example, etc.)". For anyone using AUTOMATIC1111's WebUI who wanted to try the Sigmoid merge options: they were removed because they can be reproduced using the Weighted Sum option and a bit of math — for sigmoid, weighted_alpha = sigmoid_alpha * sigmoid_alpha * (3 - (2 * sigmoid_alpha)), and a similar identity was given for inverse sigmoid.

On lineage: the Stable-Diffusion-v1-2 checkpoint was initialized from v1-1, and the stable-diffusion-2-1 model is fine-tuned from stable-diffusion-2 (768-v-ema.ckpt) with an additional 55k steps on the same dataset. Introduced in 2015, diffusion models are trained with the objective of removing successive applications of Gaussian noise on training images, which can be thought of as a sequence of denoising autoencoders. A .pth file is simply a PyTorch file, and a checkpoint file may also be called a model file. For model management, one scripted approach is to hit the models endpoint, loop through the model titles, and split each title on a space to separate the model name from its hash. To convert a Diffusers-format model to a single Stable Diffusion file, specify the source model folder and the destination .ckpt file as arguments (in practice the command is written on one line).
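The sigmoid identity quoted above is just a smoothstep curve applied to the merge ratio before handing it to Weighted Sum; a few sample values show how it eases in and out of the [0, 1] range:

```python
# The quoted identity: reproduce the removed Sigmoid merge mode by feeding a
# transformed alpha into the Weighted Sum mode.

def sigmoid_to_weighted_alpha(sigmoid_alpha):
    return sigmoid_alpha * sigmoid_alpha * (3 - 2 * sigmoid_alpha)

print(sigmoid_to_weighted_alpha(0.0))   # 0.0
print(sigmoid_to_weighted_alpha(0.5))   # 0.5
print(sigmoid_to_weighted_alpha(0.25))  # 0.15625
print(sigmoid_to_weighted_alpha(1.0))   # 1.0
```

The endpoints and midpoint are unchanged, but intermediate ratios are pulled toward the nearer parent model, which is the "sigmoid" character of the removed option.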
LoRA and LoHa both modify the U-Net through matrix decomposition, but their approaches differ; instead of updating the full model, LoRAs train only a small number of additional parameters, resulting in much smaller file sizes than fully fine-tuned checkpoints. As we look under the hood, the first observation is a text-understanding component that translates the text information into a numeric representation capturing its ideas: Stable Diffusion is a latent diffusion model (LDM) conditioned on the (non-pooled) text embeddings of a CLIP ViT-L/14 text encoder. Researchers have also introduced block-removed, compressed variants of this architecture.

The generative AI technology is the premier product of Stability AI and is considered part of the ongoing artificial-intelligence boom. Its lineage includes stable-diffusion-v1-4, resumed from stable-diffusion-v1-2, and the Stable-Diffusion-v1-5 checkpoint, initialized from v1-2 and fine-tuned for 595k steps at resolution 512x512 on "laion-aesthetics v2 5+"; v2-era base models added 220k extra steps with punsafe=0.98, trained on a less restrictive NSFW filtering of the LAION-5B dataset. Stable Diffusion Turbo is a fast-generation method implemented for SDXL and Stable Diffusion 3, and comparisons such as Hyper-SDXL vs. Stable Diffusion Turbo examine these fast variants.

With a checkpoint merger you select a "base" model and one or two other models; note that the automated autoMBW merging process takes a tremendous amount of time. The from_pretrained() method automatically detects the correct pipeline class for a task from the checkpoint, downloads and caches all the required configuration and weight files, and returns a pipeline ready for inference — though the results you get still depend on what you ask for and how much prompt engineering you are prepared to do.

Initially there was only one inpainting model, trained for the base v1.5 model, but by adding the weight difference between another model and v1.5 to the base inpainting model you get a new inpainting model that inpaints with the other model's trained concepts. In the txt2img tab, write a prompt and, optionally, a negative prompt to be used by ControlNet. Resources for more information: the Stable Diffusion GitHub repository.
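That "add difference" trick for custom inpainting models can be sketched in a few lines. Plain floats stand in for tensors, and the key names are invented for illustration; the arithmetic — inpaint + (other - base), key by key — is the whole idea:

```python
# Sketch of an "add difference" merge: graft a fine-tune's learned concepts
# onto the inpainting model by adding its delta from the shared base.

def add_difference(inpaint, other, base):
    """new_inpaint = inpaint + (other - base), applied key by key."""
    return {k: inpaint[k] + (other[k] - base[k]) for k in inpaint}

base_v15   = {"w": 1.0}
other      = {"w": 1.5}    # a fine-tune derived from v1.5
inpainting = {"w": 0.75}   # the v1.5 inpainting model

patched = add_difference(inpainting, other, base_v15)
print(patched)  # {'w': 1.25}
```

Because only the *difference* from the shared base is transplanted, the inpainting-specific weights (the extra mask channels) survive, while the fine-tune's style comes along for the ride.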
Checkpoint 3: epiCRealism. Dreambooth is a method to fine-tune a network, while ADD (Adversarial Diffusion Distillation) uses a combination of reconstruction and adversarial loss to improve image sharpness in few-step models.

In the Stable Diffusion Web UI (AUTOMATIC1111), you can change the look (style) of generated images by choosing a model from the "Stable Diffusion checkpoint" dropdown at the very top of the screen; out of the box, however, only the model called "Stable Diffusion v1.5" is installed. Training on a modern merged model is much more effective than training on NAI, so at the end you may want to adjust the weight or offset (likely because NAI is by now much diluted in newer models). Simply renaming a .ckpt to .safetensors, although sometimes suggested, doesn't always work depending on the model. MajicMix Realistic is another popular realistic checkpoint.

For inpainting, you should now be on the img2img page, Inpaint tab. In Settings, scroll down until you reach the "Quicksetting list". Stable Diffusion helps artists, designers, and even amateurs generate original images from simple text descriptions; as a local installation it keeps pushing the boundaries of realism in image generation, and you can use the model checkpoint file directly in the AUTOMATIC1111 GUI.
Some operations are heavy: with the parameters above, an automated merge run can take up to an hour or more even on an RTX 4090. Stable Diffusion v1 refers to a specific configuration of the model architecture that uses a downsampling-factor-8 autoencoder with an 860M-parameter UNet and a CLIP ViT-L/14 text encoder for the diffusion model; the Web UI supports multiple Stable Diffusion model architectures — it is not one monolithic model. (On the earlier storage tip: ZPAQ achieves higher compression than 7zip in this case, due to its native data deduplication.)

These models can be adapted to a particular style, genre, or subject, but there are also generic models capable of generating almost anything. EpiCPhotoGasm, for example, is highly tuned for photorealism and excels at creating realistic images with minimal prompting. In ComfyUI, use the Load Checkpoint node to select a model. Offering models at multiple scales aims to democratize access, giving users a variety of options for scalability and quality to meet their creative needs. Community sites let you explore thousands of high-quality Stable Diffusion models, share AI-generated art, and engage with a vibrant community of creators.

A recurring community question, "Confusion on model types (checkpoint vs. VAE)", goes: "I don't really get the concept of a VAE. I have VAE files which apply some color correction to my generation, but how do models like Realistic Vision v5.1 (VAE) work?"
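The factor-8 autoencoder mentioned above means a 512x512 RGB image maps to a 64x64 latent. The 4-channel latent depth used below is the value commonly cited for SD v1; treat it as an assumption of this sketch rather than something this article established:

```python
# Latent-space geometry implied by the downsampling-factor-8 autoencoder.

def latent_shape(width, height, downsample=8, channels=4):
    """Spatial size of the latent a given image resolution maps to."""
    assert width % downsample == 0 and height % downsample == 0
    return (width // downsample, height // downsample, channels)

print(latent_shape(512, 512))  # (64, 64, 4)
print(latent_shape(768, 512))  # (96, 64, 4)
```

Working in this 8x-smaller space is what makes latent diffusion cheap enough to run on consumer GPUs: the U-Net denoises 64x64 latents, and only the final decode touches full 512x512 pixels.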
Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images from any text input, with the extra capability of inpainting pictures by using a mask. Saved checkpoints also enable a model to resume training after interruptions or crashes. Midjourney, by comparison, gives you its own built-in tools to reshape your images.

A very versatile model rewards effort: the more powerful the prompts you give, the better the results. Having tested many of the published SD3 prompts against one such model, it does very well on them. The base model was pretrained on 256x256 images and then fine-tuned on 512x512 images, and similar to Google's Imagen, it uses a frozen CLIP ViT-L/14 text encoder to condition on text prompts; CLIP, the language model, preprocesses both the positive and the negative prompts. Finally, during training the diffusion model parameters are optimized with respect to the loss using gradient descent.

A CKPT file is a checkpoint file created by PyTorch Lightning, a PyTorch research framework. When a generated face looks wrong, it is usually because the face is too small to be generated correctly. A practical prompting tip: avoid negative embeddings unless absolutely necessary; from a simple starting point, experiment by adding positive and negative tags and adjusting the settings.
A widgets-based interactive notebook for Google Colab lets users generate AI images from prompts (Text2Image) using Stable Diffusion (by Stability AI, Runway & CompVis); it aims to be an alternative to Web UIs while offering a simple and lightweight GUI for anyone to get started. Studies of architectural compression underscore its potential for text-to-image synthesis with Stable Diffusion models.

The third part of the system is a decoder, which turns the final 64x64 latent patch into a higher-resolution 512x512 image. During training, a reconstruction loss is calculated between the predicted noise and the original noise that was added in the noising step. Together, these pieces give rise to the Stable Diffusion architecture. EveryDream, by contrast, is best thought of as training an entirely new Stable Diffusion, just a much smaller version.

The Stable Diffusion base model CAN generate anime images, but you won't be happy with the results — use a dedicated anime checkpoint instead. There are also methods for converting .ckpt files into .safetensors. A classic test prompt: "A surrealist painting of a cat by Salvador Dali".
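The "optimize the parameters with respect to the reconstruction loss" step can be shown with an absurdly simplified one-parameter predictor. Everything here (the predictor form, learning rate, target) is an invented toy, not the real U-Net training loop — only the gradient-descent structure matches:

```python
# Toy gradient-descent on a noise-prediction loss. With predictor
# eps_hat = theta * x_t, the loss L = (theta*x_t - eps)^2 has gradient
# dL/dtheta = 2 * (theta*x_t - eps) * x_t.

def sgd_step(theta, x_t, eps, lr=0.1):
    grad = 2.0 * (theta * x_t - eps) * x_t
    return theta - lr * grad

theta, x_t, eps = 0.0, 1.0, 0.5
for _ in range(50):
    theta = sgd_step(theta, x_t, eps)

print(round(theta, 4))  # 0.5 -- the parameter converges to the value that predicts eps
```

Real training does exactly this over billions of (image, timestep, noise) triples with a learned network in place of `theta`, which is why checkpoints saved along the way capture meaningfully different model states.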
LoRAs, on the other hand, are a kind of smaller model (one that has to be used in conjunction with a checkpoint) which allows you to impart a particular style to the image or create a specific character. Dreambooth, similarly, takes existing models and incorporates new concepts into them. A related question that comes up: what does "full EMA" in a model name mean? It indicates a checkpoint that still carries the full EMA weights alongside the training weights, rather than a pruned file.

Finally, setting up Clip Skip in Stable Diffusion (AUTOMATIC1111) is a breeze — it takes only a few simple steps in the Settings page.