Tuesday, 22 August 2023 10:07

My TrueNAS Scale Notes

These are my notes on setting up TrueNAS, from selecting the hardware to installing and configuring the software. You are expected to have some IT knowledge about hardware and software; these instructions do not cover everything, but they should answer the questions that actually need answering.

The TrueNAS documentation is well written and is your friend.

My Kit

Hardware Temp - might move

  • ASUS PRIME X670-P WIFI
    • General
      • ASUS PRIME X670 P : I'm not happy! - YouTube
        • The PRIME X670-P is a rather good budget board, except it is not priced at a budget level. Its launch price oscillates between 280 and 300 dollars, which is almost twice its predecessor's launch price.
        • A review.
    • Parts
      • Rubber Things P/N: 13090-00141300 (contains 1 pad) (9mm x 9mm x 1mm)
      • Standoffs P/N: 13020-01811600 (contains 1 screw and 1 standoff) (7.5mm)
      • Standoffs P/N: 13020-01811500 (contains 2 screws and 2 standoffs) (7.5mm) - These appear to be the same as 13020-01811600
    • Flashing BIOS
    • How to turn off all lights
    • Diagnostics / QLED
      • This board only has Q-LED CORE (the power light is flashed with codes)
      • [Motherboard] ASUS motherboard troubleshooting via Q-LED indicators | Official Support | ASUS Global
      • How To Reset ASUS BIOS? All Possible Ways - Most ASUS motherboards offer customizing a wide range of BIOS settings to help optimize system performance. However, incorrectly modifying these advanced options can potentially lead to boot failure or system instability.
      • Asus X670E boot time too long - Republic of Gamers Forum - 906825
        • Q: I am having an issue where the boot-up time for my new PC is very slow. I know that the first boot after building the PC takes a long time, but this is getting ridiculous.
        • A:
          • All DDR5 systems have longer boot times than DDR4 since they have to do memory tests.
          • Enable Context Restore in the DDR Settings menu of the BIOS. You might have one more long boot after that, but subsequent boots should be much quicker, until you do a BIOS update or clear the CMOS.
          • Context Restore retains the last successful POST. POST time depends on the memory parameters and configuration.
          • It is important to note that settings pertaining to memory training should not be altered until the margin for system stability has been appropriately established.
          • The disparity between what is electrically valid in terms of signal margin and what is stable within an OS can be significant depending on the platform and level of overclock applied. If we apply options such as Fast Boot and Context Restore and the signal margin for error is somewhat conditional, changes in temperature or circuit drift can impact how valid the conditions are within our defined timing window.
          • Whilst POST times with certain memory configurations are long, these things are not there to irritate us and serve a valid purpose.
          • Putting the system into S3 Resume is a perfectly acceptable remedy if finding POST / Boot times too long.
      • BIOS
  • AMD PBO (Precision Boost Overdrive)
  • AMD CBS (Custom BIOS Settings)
    • AMD Overclocking Terminology FAQ - Evil's Personal Palace - HisEvilness - Paul Ripmeester
      • AMD Overclocking Terminology FAQ. This Terminology FAQ will cover some of the basics when overclocking AMD based CPU's from the Ryzen series.
      • What is AMD CBS? Custom settings for your Ryzen CPU's that are provided by AMD, CBS stands for Custom BIOS Settings. Settings like ECC RAM that are not technically supported but work with Ryzen CPU's as well as other SoC domain settings.
  • AMD 7900 CPU
    • Ryzen 9 7900x Normal Temps? - CPUs, Motherboards, and Memory - Linus Tech Tips
      • Q: Hey everyone! So I recently got a r9 7900x coupled to a LIAN LI Galahad 240 AIO. It idles at 70C and when I open heavier games the temps spike to 95C and then goes to 90C constantly. I think that this is exaggerated and I will need to repaste and add a lot more paste. This got me wondering though...what's normal temps for the 7900x? I was thinking a 30-40 idle and 85 under load for an avg cpu. Is this realistic?
      • A: The 7900X is actually built to run at 95°C 24/7; it's confirmed by AMD. It's very different compared to any other CPU architecture on the market. Ryzen 7000 CPUs default to boosting as far as whatever cooler is fitted allows, until they hit 95°C. That is the setpoint.
    • Ryzen 9 7900x idle temp 72-82 should i return the cpu? - AMD Community
      • Hi, I just built my first PC in a long time after I switched to Mac, and I chose the 7900X with the Noctua NH-U12S redux with 2 fans. The first day it ran at around 50C, but that was when booted into the BIOS. When I run Windows and look at the temp it is always at 72-75 at idle, and when I open Visual Studio or even Spotify it goes up to 80-82. I'm getting confused because everywhere I read people say these processors run hot, but that at full load it's normal for them to operate at 95 (in Cinebench, while rendering with all cores, it goes up to 92-95).
      • The Maximum Operating Temperature of your CPU is 95c. Once it reaches 95c it will automatically start to throttle and slow down and if it can't it will shut down your computer to prevent damage.
    • Best Thermal Paste for AMD Ryzen 7 7700X – PCTest - Thermal paste is an essential component of any computer system that helps to transfer heat from the CPU to the cooler. It is important to choose the right thermal paste for your system to ensure optimal performance. In this article, we will discuss some of the best thermal pastes for AMD Ryzen 7 7700X. We will provide you with a comprehensive guide on how to choose the right thermal paste for your system and what factors you should consider when making your decision. We will also provide you with a detailed review of each of the thermal pastes we have selected and explain why they are the best options for your system. So, whether you are building a new computer or upgrading an existing one, this article will help you make an informed decision about which thermal paste to use.
  • AMD Wraith Prism Cooler
  • Asus Hyper M.2 x16 Gen 4 Card
  • Asus Standoffs
  • ASUS Rubber Pads / "M.2 rubber pad"
    • These are not thermal transfer pads but just pads that push the NVMe upwards for a good connection to the thermal pad on the heatsink above. They are more useful for the longer NVMe boards as those tend to bow in the middle.
    • M.2 rubber pad for ROG DIMM.2 - Republic of Gamers Forum - 865792
      • I found the following rubber pad in the package of the Rampage VI Omega. Could you please tell me where I have to install this? 
      • This thread has pictures of how a single pre-installed rubber pad looks and shows you the gap and why with single sided NVMe you need to install the second pad on top.
      • This setup uses two different thickness pads, but ASUS has changed from having you swap the pads to having you stick another one on top of the pre-installed pad.
    • M.2 rubber pad on Asus motherboard for single-sided M.2 storage device | Reddit
      • Q:
        • I want to insert a Samsung SSD 970 EVO Plus 1TB in a M.2 slot of the Asus ROG STRIX Z490-E GAMING motherboard.
        • The motherboard comes with a "M.2 Rubber Package" and you can optionally put a "M.2 rubber pad" when installing a "single-sided M.2 storage device" according to the manual: https://i.imgur.com/4HP37NX.webp
        • From my understanding, this Samsung SSD is single-sided because it has chips on one side only.
        • What is this "rubber pad" for? Since it's apparently optional, what are the advantages and disadvantages of installing it? The manual doesn't even explain it, and there are 2 results about it on the whole Internet (besides the Asus manual).
      • A:
        • I found this thread with the same question. Now that I've actually gone through assembly, I have some more insight into this:
        • My ASUS board has a metal heat sink that can screw over an M.2. On the underside of the heat sink, there's a thermal pad (which has some plastic to peel off).
        • The pad on the motherboard is intended to push back against the thermal pad on the heat sink in order to minimize bending of the SSD and provide better contact with the thermal pad. I now realize that the reason ASUS only sent 1 stick-on for a single-sided SSD, is because there's only 1 metal heat sink; the board-side padding is completely unnecessary without the additional pressure of the heat sink and its thermal pad, so slots without the heat sink don't need that extra stabilization.
        • So put the extra sticker with the single-sided SSD that's getting the heat sink, and don't worry about any other M.2s on the board. I left it on the default position by the CPU since it's between that and the graphics card, which makes it the most likely to have any temperature issues.
  • M.2 / NVMe Thermal Pads
    • Best Thermal Pad for M.2 SSD – PCTest - Using a thermal pad on an M.2 SSD is a great way to help keep it running cool and prevent throttling. With M.2 drives becoming increasingly popular, especially in gaming PCs and laptops where heat dissipation is critical, having the right thermal pad is important. In this guide, we’ll cover the benefits of using a thermal pad with an M.2 drive, factors to consider when choosing one, and provide recommendations on the best M.2 thermal pads currently available.
  • Noctua NF-A9 PWM Case Fan
  • Hardware BIOS Clock (RTC) and TrueNAS Time
    • NTP Servers
      • System Settings --> General --> (Localization | NTP Servers)
    • SOLVED - TrueNAS displays time correctly but sets it in BIOS | TrueNAS Community
      sudo bash             # might not be needed in TrueNAS SCALE as it does not seem to do anything
      date                  # check the current system time
      systemctl stop ntp    # stop the NTP service so ntpd can be run manually
      ntpd -g -q            # force a one-shot time sync, allowing a large initial correction
      systemctl start ntp   # restart the NTP service
      hwclock --systohc     # write the corrected system time to the hardware (BIOS) clock
      date                  # confirm the time is now correct
      
    • THE ENTIRE TIME SYSTEM!!! | TrueNAS Community
      • UTC = Coordinated Universal Time (often written "Universal Time Coordinated"). Also called Greenwich Time in some countries. It has been a world standard since at least 1960.
      • There is a discussion on time handling on FreeNAS and related systems.
    • 7 Linux hwclock Command Examples to Set Hardware Clock Date Time
      • The clock that is managed by Linux kernel is not the same as the hardware clock.
      • Hardware clock runs even when you shutdown your system.
      • Hardware clock is also called as BIOS clock.
      • You can change the date and time of the hardware clock from the BIOS.
      • However, when the system is up and running, you can still view and set the hardware date and time using Linux hwclock command as explained in this tutorial.
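      • A minimal sketch of the usual hwclock operations (standard Linux commands, run as root):
        hwclock --show        # read the hardware (BIOS/RTC) clock
        date                  # read the system clock maintained by the kernel
        hwclock --systohc     # copy the system clock to the hardware clock
        hwclock --hctosys     # or the other way round: set the system clock from the RTC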
  • RAM
  • ECC RAM
    • You need to explicitly enable ECC RAM in your BIOS.
    • ECC RAM uses extra data lines (an extra memory chip per rank, giving 72 bits instead of 64), which is why your CPU, motherboard and BIOS all need to support ECC for it to work.
    • Check you have ECC RAM (installed and enabled)
      • Your ECC RAM is enabled if you see the notification on your dashboard
      • MemTest86
        • In the main menu you can see if your RAM supports ECC and whether it is turned on or off.
      • dmidecode
        • 'dmidecode -t 16' or 'dmidecode --type 16' (they are both the same)
          • 'Physical Memory Array' information.
          • If you have ECC RAM the result will look something like this:
            Handle 0x0011, DMI type 16, 23 bytes
            Physical Memory Array
                    Location: System Board Or Motherboard
                    Use: System Memory
                    Error Correction Type: Multi-bit ECC
                    Maximum Capacity: 128 GB
                    Error Information Handle: 0x0010
                    Number Of Devices: 4
        • 'dmidecode -t 17' or 'dmidecode --type 17' (they are both the same)
          • 'Memory Device' information.
          • If you have ECC ram then the total width of your memory devices will be 72 bits (64 bits data, 8 bits ECC), not 64 bits.
            # non-ECC RAM
            Total Width: 64 bits
            Data Width: 64 bits
            
            # ECC RAM
            Total Width: 72 bits
            Data Width: 64 bits
        • 'dmidecode -t memory'
          • This just runs both the 'Type 16' and 'Type 17' tests one after the other giving you combined results to save time.
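        • A quick way to pull out just the ECC-relevant fields from the combined output (run as root):
          dmidecode -t memory | grep -E 'Error Correction Type|Total Width|Data Width'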
    • Create ECC Errors for testing
      • MemTest86 Pro has an ECC injection feature. A current list of chipsets with ECC injection capability supported by MemTest86 can be found here.
      • SOLVED - The usefulness of ECC (if we can't assess it's working)? | TrueNAS Community
        • Q:
          • Given that ECC functionality depends on several components working well together (e.g. cpu, mobo, mem) there are many things that can go wrong resulting in a user detectable lack of ECC support.
          • I consider ECC reporting (and a way to test if that is still working) a requirement as to be able to preemptively replace memory that is about to go bad.
          • I am asking for opinion of the community, and most notably senior technicians @ixsystems, regarding this stance because I am quite a bit stuck now not daring to proceed with a mission critical project.
        • This thread deals with all sorts of crazy ways of testing ECC RAM, from physical methods to software Row Hammer tests.
        • This is for reference only.
    • ECC Errors being reported
    • Misc
      • Can I install an ECC DIMM on a Non-ECC motherboard? | Integral Memory
        • Most motherboards that do not have an ECC function within the BIOS are still able to use a module with ECC, but the ECC functionality will not work.
        • Keep in mind, there are some cases where the motherboard will not accept an ECC module, depending on the BIOS version.
      • Trying to understand the real impact of not having ECC : truenas | Reddit
        • A1:
          • From everything I've read, there's no inherent reason ZFS needs ECC more than any other system, it's just that people tend to come to ZFS for the fault tolerance and correction and ECC is part of the chain that keeps things from getting corrupted. It's like saying you have the most highly rated safety certification for your car and not wearing your seatbelt - you should have a seatbelt in any car.
        • A2:
          • The TrueNAS forums have a good discussion thread on it, that I think you might have read, Non-ECC and ZFS Scrub? | TrueNAS Community. If not, I strongly encourage it.
          • The idea is, ECC prevents ZFS from incurring bitflip during day-to-day operations. Without ECC, there's always a non-zero chance it can happen. Since ZFS relies on the validity of the checksum when a file is written, memory errors could result in a bad checksum written to disk or an incorrect comparison on a following read. Again, just a non-zero chance of one or both events occurring, not a guarantee. ZFS lacks an "fsck" or "chkdsk" function to repair files, so once a file is corrupted, ZFS uses the checksum to note the file differs from the checksum and recover it, if possible. So, in the case of a corrupted checksum and a corrupted file, ZFS could potentially modify the file even further towards complete unusability. Others can comment if there's any way to detect this, other than via a pool scrub, but I'm unaware.
          • Some people say, "turn off ZFS pool scrubs, if you have no ECC RAM", but ZFS will still checksum files and compare during normal read activity. If you have ECC memory in your NAS, it effectively eliminates the chance of memory errors resulting in a bad checksum on disk or a bad comparison during read operations. That's the only way. You probably won't find many people that say, "I lost data due to the lack of ECC RAM in my TrueNAS", but anecdotal evidence from the forum posts around ZFS pool loss points in that direction.
        • A3:
        • A4:
          • Because ZFS uses checksums a bitflip during read will result in ZFS incorrectly detecting the data as damaged and attempting to repair it. This repair will succeed unless the parity/redundancy it uses to repair it experiences the same bitflip, in which case ZFS will log an unrecoverable error. In neither case will ZFS replace the data on disk unless the bitflips coincidentally create a valid hash. The odds of this are about 1 in 1-with-80-zeroes-after-it.
        • And lots more.....
      • ECC vs non-ECC RAM and ZFS | TrueNAS Community
        • I've seen many people unfortunately lose their zpools over this topic, so I'm going to try to provide as much detail as possible. If you don't want to read to the end then just go with ECC RAM.
        • For those of you that want to understand just how destructive non-ECC RAM can be, then I'd encourage you to keep reading. Remember, ZFS itself functions entirely inside of system RAM. Normally your hardware RAID controller would do the same function as the ZFS code. And every hardware RAID controller you've ever used that has a cache has ECC cache. The simple reason: they know how important it is to not have a few bits that get stuck from trashing your entire array. The hardware RAID controller(just like ZFS) absolutely NEEDS to trust that the data in RAM is correct.
        • For those that don't want to read, just understand that ECC is one of the legs on your kitchen table, and you've removed that leg because you wanted to reuse old hardware that uses non-ECC RAM. Just buy ECC RAM and trust ZFS. Bad RAM is like your computer having dementia. And just like those old folks homes, you can't go ask them what they forgot. They don't remember, and neither will your computer.
        • A full write-up and discussion.
      • Q re: ECC Ram | TrueNAS Community
        • Q: Is it still recommended to use ECC Ram on a TrueNAS Scale build?
        • A1:
          • Yes. It still uses ZFS file system which benefits from it.
        • A2:
          • It's recommended to use ECC any time you care about your data--TrueNAS or not, CORE or SCALE, ZFS or not. Nothing's changed in this regard, nor is it likely to.
        • A3:
          • One thing people overlook is that, statistically, non-ECC memory WILL have failures. Okay, perhaps at extremely rare times. However, now that ZFS is protecting billions of petabytes (okay, I don't know how much in total... just guessing), there are bound to be failures from non-ECC memory that cause data loss. Or pool loss.
          • Specifically, in-memory corruption of an already check-summed block that ends up being written to disk may be found by ZFS during the next scrub. BUT, in all likelihood that data is lost permanently unless you have unrelated backups. (Backups of corrupt data simply restore corrupt data...)
          • Then there is the case of a not-yet check-summed block that got corrupted. Along comes ZFS to give it a valid checksum and write it to disk. ZFS will never detect this as bad during a scrub, unless it was metadata that is invalid (like a compression algorithm value not yet assigned), and then it is still data loss. Potentially the entire pool lost.
          • This is just for ZFS data, which accounts for most of the data movement. However, there are program code and data blocks that could also be corrupted...
          • Are these rare? Of course!!! But, do you want to be a statistic?
  • ASUS BIOS FlashBack
    • To use BIOS FlashBack
      1. Download the firmware for your motherboard, paying great attention to the model number
        • i.e. `PRIME X670-P WIFI BIOS 1654`, not `PRIME X670-P BIOS 1654`
      2. Run the 'rename' app to rename the firmware.
      3. Place this firmware in the root of an empty FAT32-formatted USB pendrive (see the sketch after this list for preparing the pendrive on Linux). I recommend a pendrive with an access light.
      4. With the computer powered down but still plugged in and the PSU still on, insert the pendrive into the correct BIOS FlashBack USB socket.
      5. Press and hold the FlashBack button for 3 flashes and then let go:
        • Flashing Green LED: the firmware upgrade is active. It will carry on flashing green until the flashing is finished, which takes 8 minutes at most, and then the light will turn off and stay off. I would leave it for 10 minutes to be sure, but mine took 5 minutes. The pendrive will be accessed at regular intervals, but not as much as you would think.
        • Solid Green LED: the firmware flashing never started. This is probably because the firmware is the wrong one for your motherboard or the file has not been renamed. With this outcome you will only see the USB drive accessed once via the pendrive's activity light (if it has one).
        • Red LED: the firmware update failed during the process.
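      • A minimal sketch of preparing the pendrive on a Linux box; the device name `/dev/sdX` and the renamed firmware filename are placeholders (the ASUS renamer tool tells you the real name):
        lsblk                              # identify the pendrive first, e.g. /dev/sdX
        sudo mkfs.vfat -F 32 /dev/sdX1     # format its first partition as FAT32 (this erases it)
        sudo mount /dev/sdX1 /mnt
        sudo cp PRIMEX670PW.CAP /mnt/      # the renamed firmware goes in the root of the drive
        sudo umount /mnt                   # make sure writes are flushed before removing it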
    • How long is BIOS flashback? - CompuHoy.com
      • How long should BIOS update take? It should take around a minute, maybe 2 minutes. I’d say if it takes more than 5 minutes I’d be worried but I wouldn’t mess with the computer until I go over the 10 minute mark. BIOS sizes are these days 16-32 MB and the write speeds are usually 100 KB/s+ so it should take about 10s per MB or less.
      • This page is loaded with ADs
    • What is BIOS Flashback and How to Use it? | TechLatest - Do you have any doubts regarding BIOS Flashback? No issues, we have got your back. Follow the article till the end to clear doubts regarding BIOS Flashback.
    • [Motherboard] How to use USB BIOS FlashBack? | Official Support | ASUS USA
      • Use case: if your motherboard cannot be turned on, or the power light is on but there is no display, you can use the USB BIOS FlashBack™ function.

Building the Server

I had some interesting issues when building my setup and I will list my notes on that journey here.

PC not POSTING [Solved]

After building my PC it did not make any beeps or POST. Sometimes the power light flashes, but I can always get into the BIOS on the first boot after I have wiped the BIOS.

Things I tried

  • Upgrading the BIOS.
  • Clearing the BIOS with the jumper.
  • Clearing the BIOS with the jumper and then pulling the battery out.

Cause

On the first boot the computer is building a memory profile, or even just testing the RAM. I have 128GB of RAM installed, so it takes a lot longer to finish what it is doing.

Solution

Wait for the computer to finish these tests; it is not broken. My PC took 18m55s to POST, so you should wait at least 20 minutes.

My board has Q-LED Core, which uses the power light to indicate status. If the power light is flashing or on, the computer is alive and you should just wait. Of course, you have already double-checked all of the connections.

After this initial boot the PC will boot up in a normal time (usually under a minute, but it might be 2-3 minutes depending on your setup).

The boot time will go back to this massive figure if you alter any memory settings in the BIOS or, indeed, wipe the BIOS. Upgrading the BIOS will also have this effect.

 

Quick Setup Instructions

Build Hardware --> Install TrueNAS --> Configure Settings --> (Create `Storage Pool` --> Create `Data VDEV`) --> Create `Dataset` --> Setup backups --> Check Backups --> check backups --> Setup Virtual Machines --> load files as required --> check backups

  • ZFS does not like a pool to be more than 50% full otherwise it has performance issues.
  • Built into the ZFS spec is a caveat that you do NOT allow your ZVOL to get over 80% in use.
  • Use LZ4 compression for Datasets and ZVols.
  • Use ECC RAM. You don't have to, but it is better for data security, although you will lose a bit of performance (10-15%).
  • TrueNAS minimum required RAM: 8GB
  • If you use an onboard graphics card (iGPU) then the system RAM is nicked for this. Using a discrete graphics card (not onboard) will return the RAM to the system.
  • The password reset on the `physical terminal` does not like special characters in it. So use a normal password and then change it immediately in the TrueNAS GUI.
  • The screens documentation has a lot of settings explained. Further notes are sometimes hidden under expandable sections.
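  • On the LZ4 tip above: compression is normally set per dataset in the GUI, but the underlying ZFS commands look like this (the dataset name is just an example):
    zfs set compression=lz4 MyPoolA/Media    # enable LZ4 on an existing dataset
    zfs get compression MyPoolA/Media        # confirm what is actually in effect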

This is an overview of the setup and you can just fill in the blanks.

  • Buy your kit (and assemble)
    • Large PC case with at least 4 x 5.25" and 1 x 3.5" drive bays.
    • Motherboard - SATA must be hot swappable and enabled
    • RAM - You should run TrueNAS with ECC memory where possible, but it is not a requirement.
    • twin 2.5" drive caddy that fits into a 3.5" drive bay
    • Quad 3.5" drive caddy that fits into 3 x 5.25" drive bays
    • boot drive = 2 x SSD (as raid for redundancy)
    • Long Term Storage / Slow Storage / Magnetic
      • 4 x 3.5" Spinning Disks (HDD)
      • Western Digital
      • CMR only
      • you can use drives with the following sector formats starting with the best:
        1. 4Kn
        2. 512e
        3. 512n
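      • You can check which sector format a drive reports from a Linux shell; 4Kn shows 4096/4096, 512e shows 4096 physical / 512 logical, and 512n shows 512/512:
        lsblk -o NAME,PHY-SEC,LOG-SEC    # physical and logical sector size per disk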
    • Virtual Disks Storage  = 2 x 2TB NVMe
    • Large power supply
  • Make a storage diagram (Enclosure)
    • Take a photo of your tower.
    • Use Paint.NET and add the storage references (sda, sdb, sdc...) to the right location.
    • Save this picture
    • Add this picture to your Dashboard
  • Identify your drive bays
    1. Make an excel file to match your drive serials to the physical locations on your server
    2. Put Stickers on your Enclosure(s)/PC for drive locations
      • Just as it says, print some labels with 1-8 numbers and then stick them on your PC.
  • Configure BIOS
    • Update firmware
    • setup thermal monitoring
    • Enable ECC RAM
      • It needs to be set to `Enabled` in the BIOS, `Auto` is no good.
    • Enable Virtualization Technology
      • Enable
        • Base Virtualization: AMD-V / Intel VMX
        • PCIe passthrough: IOMMU / AMD-Vi / VT-d
      • My ASUS PRIME X670-P WIFI Motherboard BIOS settings:
        • Advanced --> CPU Configuration --> SVM: Enabled
        • Advanced --> PCI Subsystem Settings --> SR-IOV: Disabled
        • Advanced --> CBS --> IOMMU: Enabled
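      • A quick sanity check from the TrueNAS SCALE shell once these are enabled (plain Linux commands, nothing TrueNAS-specific):
        grep -Eo 'svm|vmx' /proc/cpuinfo | sort -u    # svm = AMD-V, vmx = Intel VT-x
        dmesg | grep -e DMAR -e IOMMU                 # should report that the IOMMU is enabled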
    • Backup BIOS config (if possible) to USB and keep safe.
    • Set BIOS Time (RTC)
  • First POST in the BIOS takes ages (My system does this)
    • Wait 20 mins for the memory profiles to be built and the PC to POST.
    • If your PC POSTs quickly, you don't have to wait.
    • See below for more information.
  • Install and initially configure TrueNAS
    • Install TrueNAS
      • Mirrored on your 2 x Boot Drives
      • Use the `Admin` option and not root.
      • Use a simple password for admin (for now)
    • Login to TrueNAS
    • Set Network globals
      • Network --> Global Configuration --> Settings --> (Hostname | Domain | Primary DNS Server | IPv4 Default Gateway)
    • Set Static IP
      • Network --> Interfaces --> click the interface name (e.g. `enp1s0`)
      • Untick DHCP
      • Click `Add` button next to Aliases
      • Add your IP in the format 10.0.0.x/24
      • Test Changes
      • NB: The process above never works when using a single network adapter, use the console/terminal instead and then reboot.
    • Re-Connect via the hostname instead of the IP
    • Configure the System Settings
      • System Settings --> (GUI | Localization)
      • Go through all of the settings here and set as required.
    • Set/Sync Real Time Clock (RTC)
    • Update TrueNAS
      • System Settings --> Update
    • Make your `admin` password strong
      • Credentials --> Local Users --> admin --> Edit
      • If you set a weak password during installation, change it to a complex one now and add it to your password manager.
      • Fill in your email address while you are at it
    • Reconnect to your TrueNAS using the FQDN (optional)
      • This assumes you have all of this setup.
  • Physically install your storage disks
    • Storage --> Disks
    • Have a look at your disks. You should see your 2 x SSDs that have been mirrored for the boot volume that TrueNAS sits on, named `boot-pool`; this pool cannot be used for normal data.
    • If you have NVMe disks already installed on your motherboard, they might be shown here.
    • Insert one `Long term storage` disk into your HDD caddy.
      • Make a note of the serial number.
      • When you put new disks in they will automatically appear.
      • Do them one by one and make a note of their name (sda, sdb, sdc...) and physical location (i.e. the slot you just put it in)
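    • A shell shortcut for matching serial numbers to device names (lsblk and smartctl are standard Linux tools; smartctl is part of smartmontools, which TrueNAS includes):
      lsblk -o NAME,SIZE,MODEL,SERIAL    # every block device with its model and serial number
      smartctl -i /dev/sda               # full identity details for a single disk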
  • Check the location of your System Dataset Pool is where you want it
    • System Settings --> Advanced --> Storage --> Configure --> System Dataset Pool
  • Setting up your first pool
    • Storage --> Create Pool
    • Select all 4 of your `Long term storage` disks and TrueNAS will make a best guess at what configuration you should have, for me it was:
      • Data VDEVs (1 x RAIDZ2 | 4 wide | 465.76 GiB)
      • 4 Disks = RAIDZ2 (2 x data disks, 2 x parity disks = I can lose any 2 disks)
    • Make sure you give it a name.
      • This is not easy to change at a later date so choose wisely.
    • Click `Create` and wait for completion
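    • Once created, you can sanity-check the layout from the shell; a sketch assuming the pool was named `MyPoolA`:
      zpool status MyPoolA    # shows the RAIDZ2 vdev and its member disks
      zpool list MyPoolA      # size, allocated space, free space and health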
  • Backup the TrueNAS config
    • System Settings --> General --> Manual Configuration --> Download File
  • -------------------------
  • ------------------------
  • Setting up data sharing
  • Setting up backup for the system and your stored data
  • Configuring virtual machines (VMs) or Apps
  • Settings
    • enable email notifications
  • add truecharts (optional)
  • Install Apps (optional)
  • + 6 things you should do
  • setup nextcloud app + host file paths what are they?
  • Add TrueCharts catalog + takes ages to install, it is not
  • UPS Configuration (i.e. shutdown... and monitoring; it is built in)
  • if your disks are brand new or second-hand you should do a burn-in test
  • Backups
  • Remote backup (S3)
  • Snapshots
  • Pool Dataset hierarchy
    • MyPoolA
      • Media
      • Virtual_Disks
      • ISOs
      • Backups
      • ...............................
    • SSD1?
    • NVME1?
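    • The hierarchy above is normally built in the GUI, but the equivalent ZFS commands are a useful mental model (pool/dataset names taken from the example above):
      zfs create MyPoolA/Media            # each dataset inherits properties from its parent
      zfs create MyPoolA/Virtual_Disks
      zfs create MyPoolA/ISOs
      zfs create MyPoolA/Backups
      zfs list -r MyPoolA                 # show the resulting tree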
  • Test RAM
  • disable ipv6

Notes

  • General
    • Backup and Restore TrueNAS Config
      • System Settings --> General --> Manual Configuration --> Download File
      • System Settings --> General --> Manual Configuration --> Upload File
    • Cannot reach update servers
      • Check your RTC  and System Clock are correct.
      • Check you have your gateway and DNS configured properly.
        • Network --> Global configuration
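      • A quick way to test each link in the chain from the shell (the hostname here is only an example):
        date                          # a badly wrong clock breaks TLS to the update servers
        ping -c 3 1.1.1.1             # raw IP connectivity through the gateway
        ping -c 3 www.truenas.com     # the same, but also exercises DNS resolution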
    • Get boot
  • Virtualization
    • General
      • GPU passthrough | TrueNAS Community
        • You need 2 GPUs to do both passthrough and have one available to your container apps. To make it available to VMs for passthrough it isolates the GPU from the rest of the system.
    • Configuring BIOS
    • AMD Virtualization (AMD-V)
      • SVM (Secure Virtual Machine)
        • Base Virtualization
      • SR-IOV (Single Root IO Virtualization Support)
        • It allows different virtual machines in a virtual environment to share a single PCI Express hardware interface.
        • The hardware itself needs to support SR-IOV.
        • Very few devices support SR-IOV.
        • Each VM will get its own containerised instance of the card (a "shadow").
        • x86 virtualization - Wikipedia
          • In SR-IOV, the most common of these, a host VMM configures supported devices to create and allocate virtual "shadows" of their configuration spaces so that virtual machine guests can directly configure and access such "shadow" device resources.[52] With SR-IOV enabled, virtualized network interfaces are directly accessible to the guests,[53] avoiding involvement of the VMM and resulting in high overall performance
        • Overview of Single Root I/O Virtualization (SR-IOV) - Windows drivers | Microsoft Learn - The SR-IOV interface is an extension to the PCI Express (PCIe) specification.
        • Configure SR-IOV for Hyper-V Virtual Machines on Windows Server | Windows OS Hub
          • SR-IOV (Single Root Input/Output Virtualization) is a host hardware device virtualization technology that allows virtual machines to have direct access to host devices. It can virtualize different types of devices, but most often it is used to virtualize network adapters.
          • In this article, we’ll show you how to enable and configure SR-IOV for virtual machine network adapters on a Windows Hyper-V server.
        • Enable SR-IOV on KVM | VM-Series Deployment Guide
          • Single root I/O virtualization (SR-IOV) allows a single PCIe physical device under a single root port to appear to be multiple separate physical devices to the hypervisor or guest.
          • To enable SR-IOV on a KVM guest, define a pool of virtual function (VF) devices associated with a physical NIC and automatically assign VF devices from the pool to PCI IDs.
        • Enable SR-IOV on KVM | VMWare - To enable SR-IOV on KVM, perform the following steps.
        • Single Root IO Virtualization (SR-IOV) - MLNX_OFED v5.4-1.0.3.0 - NVIDIA Networking Docs
          • Single Root IO Virtualization (SR-IOV) is a technology that allows a physical PCIe device to present itself multiple times through the PCIe bus.
          • This technology enables multiple virtual instances of the device with separate resources.
          • NVIDIA adapters are capable of exposing up to 127 virtual instances (Virtual Functions (VFs) for each port in the NVIDIA ConnectX® family cards. These virtual functions can then be provisioned separately. Each VF can be seen as an additional device connected to the Physical Function. It shares the same resources with the Physical Function, and its number of ports equals those of the Physical Function.
          • SR-IOV is commonly used in conjunction with an SR-IOV enabled hypervisor to provide virtual machines direct hardware access to network resources hence increasing its performance.
            In this chapter we will demonstrate setup and configuration of SR-IOV in a Red Hat Linux environment using ConnectX® VPI adapter cards.
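        • On Linux, virtual functions on a supported NIC are typically created through sysfs; a minimal sketch run as root (the interface name `eth0` and the VF count are assumptions):
          cat /sys/class/net/eth0/device/sriov_totalvfs     # how many VFs the NIC supports, if any
          echo 4 > /sys/class/net/eth0/device/sriov_numvfs  # create 4 virtual functions
          lspci | grep -i 'virtual function'                # the VFs appear as extra PCI devices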
      • IOMMU (AMD-Vi) (VT-d) (Input-Output Memory Management Unit) (PCI Passthrough)
        • An input/output memory management unit (IOMMU) allows guest virtual machines to directly use peripheral devices, such as Ethernet, accelerated graphics cards, and hard-drive controllers, through DMA and interrupt remapping. This is sometimes called PCI Passthrough.
        • It can isolate I/O and memory accesses (from other VMs and the Host system) to prevent DMA attacks on the physical server hardware.
        • There will be a small performance hit using this technology but nothing that will be noticed.
        • IOMMU (Input-output memory management unit) manage I/O and MMU (memory management unit) manage memory access.
        • So long story short, the only way an IOMMU will help you is if you start assigning HW resources directly to the VM.
        • Thoughts dereferenced from the scratchpad noise. | What is IOMMU and how it can be used?
          • Describes, in-depth,  IOMMU, SR-IOV and PCIe passthrough and is well written by a firmware engineer.
          • General
            • IOMMU is a generic name for technologies such as VT-d by Intel, AMD-Vi by AMD, TCE by IBM and SMMU by ARM.
            • First of all, IOMMU has to be initiated by UEFI/BIOS and information about it has to be passed to the kernel in ACPI tables
            • One of the most interesting use cases of IOMMU is PCIe Passthrough. With the help of the IOMMU, it is possible to remap all DMA accesses and interrupts of a device to a guest virtual machine OS address space, by doing so, the host gives up complete control of the device to the guest OS.
            • SR-IOV allows different virtual machines in a virtual environment to share a single PCI Express hardware interface, though very few devices support SR-IOV.
          • Overview
            • The I/O memory management unit (IOMMU) is a type of memory management unit (MMU) that connects a Direct Memory Access (DMA) capable expansion bus to the main memory.
            • It extends the system architecture by adding support for the virtualization of memory addresses used by peripheral devices.
            • Additionally, it provides memory isolation and protection by enabling system software to control which areas of physical memory an I/O device may access.
            • It also helps filter and remap interrupts from peripheral devices
          • Advantages
            • Memory isolation and protection: device can only access memory regions that are mapped for it. Hence faulty and/or malicious devices can’t corrupt memory.
            • Memory isolation allows safe device assignment to a virtual machine without compromising host and other guest OSes.
          • Disadvantages
            • Latency in dynamic DMA mapping, translation overhead penalty.
            • Host software has to maintain in-memory data structures for use by the IOMMU
        • Enable IOMMU or VT-d in your motherboard BIOS - BIOS - Tutorials - InformatiWeb
          • If you want to "pass" the graphics card or other PCI device to a virtual machine by using PCI passthrough, you should enable IOMMU (or Intel VT-d for Intel) in the motherboard BIOS of your server.
          • This technology allows you:
            • to pass a PCI device to a HVM (hardware or virtual machine hardware-assisted virtualization) virtual machine
            • isolate I/O and memory accesses to prevent DMA attacks on the physical server hardware.
        • PCI passthrough with Citrix XenServer 6.5 - Citrix - Tutorials - InformatiWeb Pro
          • Why use this feature ?
            • To use physical devices of the server (USB devices, PCI cards, ...).
            • Thus, the machine is isolated from the system (through virtualization of the machine), but it will have direct access to the PCI device. So the virtual machine has direct access to the PCI device and therefore to the server hardware. This poses a security problem because this virtual machine will have direct memory access (DMA) to it.
          • How to correct this DMA vulnerability ?
            • It's very simple, just enable the IOMMU (or Intel VT-d) option in the motherboard BIOS. This feature allows the motherboard to "remap" access to hardware and memory, to limit access to the device associated to the virtual machine.
            • In summary, the virtual machine can use the PCI device, but it will not have access to the rest of the server hardware.
            • Note : IOMMU (Input-output memory management unit) manage I/O and MMU (memory management unit) manage memory access.
            • There is a simple graphic that explains things.
          • IOMMU or VT-d is required to use PCI passthrough ?
            • IOMMU is optional but recommended for paravirtualized virtual machines (PV guests)
            • IOMMU is required for HVM (Hardware virtual machine) virtual machines. HVM is identical to the "Hardware-assisted virtualization" technology.
            • IOMMU is required for the VGA passthrough. To use the VGA passthrough, refer to our tutorial : Citrix XenServer - VGA passthrough
        • What is IOMMU? | PeerSpot
          • IOMMU stands for Input-Output Memory Management Unit. It connects i/o devices to the DMA bus the same way processor is connected to the memory via the DMA bus.
          • SR-IOV is different, the peripheral itself must carry the support. The HW knows it's being virtualized and can delegate a HW slice of itself to the VM. Many VMs can talk to an SR-IOV device concurrently with very low overhead.
          • The only thing faster than SR-IOV is PCI passthrough though in that case only one VM can make use of that device, not even the host operating system can use it. PCI passthrough would be useful for say a VM that runs an intense database that would benefit from being attached to a FiberChannel SAN.
          • IOMMU is a component in a memory controller that translates device virtual addresses into physical addresses.
          • The IOMMU’s DMA re-mapping functionality is necessary in order for VMDirectPath I/O to work. DMA transactions sent by the passthrough PCI function carry guest OS physical addresses which must be translated into host physical addresses by the IOMMU.
          • Hardware-assisted I/O MMU virtualization called Intel Virtualization Technology for Directed I/O (VT-d) in Intel processors and AMD I/O Virtualization (AMD-Vi or IOMMU) in AMD processors, is an I/O memory management feature that remaps I/O DMA transfers and device interrupts. This feature (strictly speaking, is a function of the chipset, rather than the CPU) can allow virtual machines to have direct access to hardware I/O devices, such as network cards, storage controllers (HBAs), and GPUs.
        • x86 virtualization - Wikipedia
          • An input/output memory management unit (IOMMU) allows guest virtual machines to directly use peripheral devices, such as Ethernet, accelerated graphics cards, and hard-drive controllers, through DMA and interrupt remapping. This is sometimes called PCI passthrough.
        • virtualbox - What is IOMMU and will it improve my VM performance? - Ask Ubuntu
          • So long story short, the only way an IOMMU will help you is if you start assigning HW resources directly to the VM.
        • Linux virtualization and PCI passthrough | IBM Developer - This article explores the concept of passthrough, discusses its implementation in hypervisors, and details the hypervisors that support this recent innovation.
        • PCI(e) Passthrough - Proxmox VE
          • PCI(e) passthrough is a mechanism to give a virtual machine control over a PCI device from the host. This can have some advantages over using virtualized hardware, for example lower latency, higher performance, or more features (e.g., offloading).
          • But, if you pass through a device to a virtual machine, you cannot use that device anymore on the host or in any other VM.
        • Beginner friendly guide to GPU passthrough on Ubuntu 18.04
          • Beginner friendly guide, on setting up a windows virtual machine for gaming, using VFIO GPU passthrough on Ubuntu 18.04 (including AMD Ryzen hardware selection).
          • Devices connected to the mainboard, are members of (IOMMU) groups – depending on where and how they are connected. It is possible to pass devices into a virtual machine. Passed through devices have nearly bare metal performance when used inside the VM.
          • On the downside, passed through devices are isolated and thus no longer available to the host system. Furthermore it is only possible to isolate all devices of one IOMMU group at the same time. This means, even when not used in the VM if a devices is IOMMU-group sibling of a passed through device, it can not be used on the host system.
        • PCI passthrough via OVMF - Ensuring that the groups are valid | ArchWiki
          • The following script should allow you to see how your various PCI devices are mapped to IOMMU groups. If it does not return anything, you either have not enabled IOMMU support properly or your hardware does not support it.
            This might need changing for TrueNAS.
            #!/bin/bash
            shopt -s nullglob
            for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
                echo "IOMMU Group ${g##*/}:"
                for d in $g/devices/*; do
                    echo -e "\t$(lspci -nns ${d##*/})"
                done;
            done;
          • Example output
            IOMMU Group 1:
            	00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)
            IOMMU Group 2:
            	00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:0e31] (rev 04)
            IOMMU Group 4:
            	00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:0e2d] (rev 04)
            IOMMU Group 10:
            	00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:0e26] (rev 04)
            IOMMU Group 13:
            	06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
            	06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
          • An IOMMU group is the smallest set of physical devices that can be passed to a virtual machine. For instance, in the example above, both the GPU in 06:00.0 and its audio controller in 6:00.1 belong to IOMMU group 13 and can only be passed together. The frontal USB controller, however, has its own group (group 2) which is separate from both the USB expansion controller (group 10) and the rear USB controller (group 4), meaning that any of them could be passed to a virtual machine without affecting the others.
        • PCI Passthrough in TrueNAS (IOMMU / VT-d)
          • PCI nic Passthrough | TrueNAS Community
            • It's usually not possible to pass single ports on dual-port NICs, because they're all downstream of the same PCI host. The error message means the VM wasn't able to grab the PCI path 1/0, as that's in use in the host TrueNAS system. Try a separate PCI NIC, and passing that through, or passing through both ports.
          • PCI Passthrough, choose device | TrueNAS Community
            • Q: I am trying to passthrough a PCI TV Tuner. I choose PCI Passthrough Device, but there's a huge list of devices, but no reference. How to figure out which device is the TV Tuner?
            • A: perhaps you're looking for
              lspci -v
          • Issue with PCIe Passthrough to VM - Scale | TrueNAS Community
            • I am unable to see any of my PCIe devices in the PCIe passthrough selection of the add device window in the vm device manager.
            • I have read a few threads on the forum and can confidently say:
              1. My Intel E52650l-v2 supports VT-d
              2. Virtualization support is enabled in my Asus P9x79 WS
              3. I believe IOMMU is enabled as this is my output:
                dmesg | grep -e DMAR -e IOMMU
                [    0.043001] DMAR: IOMMU enabled
                [    5.918460] AMD-Vi: AMD IOMMUv2 functionality not available on this system - This is not a bug.
            • Does dmesg show that VT-x is enabled? I don't see anything in your board's BIOS settings to enable VT-x.
            • Your CPU is of a generation that according to others (not my area of expertise) has limitations when it comes to virtualization.
          • SOLVED - How to pass through a pcie device such as a network card to VM | TrueNAS Community
            • On your virtual machine, click Devices, then Add, then select the type of PCI Passthru Device, then select the device...
            • lspci may help you to find the device you're looking for in advance.
            • You need the VT-d extension (IOMMU for AMD) for device passthrough in addition to the base virtualization requirement of KVM.
            • How does this come out? I imagine the answer is no output for you, but on a system with IOMMU enabled, you will see a bunch of lines, with this one being the most important to see:
              dmesg | grep -e DMAR -e IOMMU
              [    0.052438] DMAR: IOMMU enabled
            • Solution: I checked the bios and enabled VT-d
          • PCI Passthrough | TrueNAS Community
            • Q: I'm currently attempting to pass through a PCIe USB controller to a VM in TrueNAS core with the aim of attaching my printers to it allowing me to create a print server that I previously had on an M72 mini pc.
            • A:
              • It's pretty much right there in that first post (if you take the w to v correction into account).
              • The missing part at the start is that you run pciconf -lv to see the numbers at the start of that screenshot
              • You take the last 3 numbers from the bit at the beginning of the line and use those with slashes instead of colons between them in the pptdevs entry.
              • from that example:
                xhci0@pci0:1:0:0:
                
                becomes
                
                1/0/0
          • pfSense inside of TrueNAS guide (TrueNAS PCI passthrough) | Reddit
            • Hello everyone, this is my first time posting in here, I just want to make a guide on how to passthrough PCI devices on TrueNAS, because I wasted a lot of time trying a lot of iobhyve codes in TrueNAS shell just to find out that it wont work at all plus there seems to not be a lot of documentation about PCI passthrough on bhyve/FreeNAS/TrueNAS.
            • Having vmm.ko to be preloaded at boot-time in loader.conf.
            • Go to System --> Tunables, add a line and type in "vmm_load" in the Variable, "YES" as the Value and LOADER as Type. Click save
          • Group X is not viable Please ensure all devices within the iommu_group are bound to their vfio bus driver.
            • Issues with IOMMU groups for VM passtrough. | TrueNAS Community
              # Edit the GRUB defaults file
              nano /usr/share/grub/default/grub
              
              # Add the following kernel parameters
              intel_iommu=on pcie_acs_override=downstream
              
              # to the existing line, so that it reads:
              # GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"
              GRUB_CMDLINE_LINUX_DEFAULT="quiet"
              
              # Apply the change
              update-grub
              
              # Reboot PC
            • Unable to pass PCIe SATA controller to VM | TrueNAS Community
              • Hi, I am trying to access a group of disks from a former (dead) server in a VM. To this end I have procured a SATA controller and attached the disks to it. I have added the controller to the VM as PCI passthrough. when I try to boot the VM, I get:
                "middlewared.service_exception.CallError: [EFAULT] internal error: qemu unexpectedly closed the monitor: 2023-07-27T23:59:35.560753Z qemu-system-x86_64: -device vfio-pci,host=0000:04:00.0,id=hostdev0,bus=pci.0,addr=0x7: vfio 0000:04:00.0: group 8 is not viable Please ensure all devices within the iommu_group are bound to their vfio bus driver."
              • lspci -v
                04:00.0 SATA controller: ASMedia Technology Inc. Device 1064 (rev 02) (prog-if 01 [AHCI 1.0])
                Subsystem: ZyDAS Technology Corp. Device 2116
                Flags: fast devsel, IRQ 31, IOMMU group 8
                Memory at fcd82000 (32-bit, non-prefetchable) [size=8K]
                Memory at fcd80000 (32-bit, non-prefetchable) [size=8K]
                Expansion ROM at fcd00000 [disabled] [size=512K]
                Capabilities: [40] Power Management version 3
                Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
                Capabilities: [80] Express Endpoint, MSI 00
                Capabilities: [100] Advanced Error Reporting
                Capabilities: [130] Secondary PCI Express
                Kernel driver in use: vfio-pci
                Kernel modules: ahci
            • Unable to Pass PCI Device to VM | TrueNAS Community
              • Q:
                • I'm trying to pass through a PCI Intel Network Card to a specific virtual machine. To do that, I:
                  1. confirmed that IOMMU is enabled via:
                     dmesg | grep -e DMAR -e IOMMU
                  2. Identified the PCI device in question using lspci
                  3. Edited the VM and added the PCI device passthrough (having already identified it via lspci) and saved my changes. Attempting to relaunch the VM generates the following error:
                    "[EFAULT] internal error: qemu unexpectedly closed the monitor: 2022-02-17T17:34:27.195899Z qemu-system-x86_64: -device vfio-pci,host=0000:02:00.1,id=hostdev0,bus=pci.0,addr=0x5: vfio 0000:02:00.1: group 15 is not viable Please ensure all devices within the iommu_group are bound to their vfio bus driver."
                • I thought I read on here (maybe it was CORE and not SCALE) that there shouldn't be any manual loading of drivers or modules but it seems like something isn't working correctly here. Any ideas?
              • A1: Why is this error happening
                • As an update in case this helps others - you have to select both PCI addresses within a given group. In my case, my network adapter was a dual-port adapter and I was incorrectly selecting only one PCI address. Going back and adding the second PCI address as a new entry resolved the issue.
                • Yes thats an issue, you can only passthrough full IOMMU groups.
                • @theprez in some cases this is dependent on the PCI devices in question. For GPU passthrough, for example, we want to isolate the GPU devices from the host as soon as the system boots, as otherwise we are not able to do so later once the system has booted. Similarly, some PCI devices do not have a reset mechanism defined, so we are unable to properly isolate them from the host on the fly; these devices behave differently - some isolate, but when we stop the VM they should be given back to the host and that does not happen, whereas for other devices stopping the VM hangs it indefinitely because no reset mechanism was defined.
                • Generally it is not required that you isolate all of the devices in your IOMMU group, as the system usually does this automatically, but some devices can be picky. We have a suggestion request open which allows you to isolate devices from the host on boot automatically and keep them isolated, similar to how the system does it for GPU devices. However, seeing this case, it might be nice if you create a suggestion ticket to somehow allow isolating all PCI devices in a particular IOMMU group, clarifying how you think the feature should work.
              • A2: Identify devices
                • Way 1
                  1. Go to a shell prompt (I use SCALE, so its under System Settings -> Shell) and type in lspci and observe the output.
                  2. If you are able to recognize the device based on the description, make note of the information in the far left (such as 7f:0d.0) as you'll need that for step 3.
                  3. Back under your virtual machine, go to 'Devices --> Add'. For type select PCI pass through device, allow a few moments for the second dropdown to populate. Select the appropriate item that matches what you found in step 2. Note: there may be preceding zeros. So following the same example as I mentioned in step 2, in my case it shows in the drop down menu pci_0000_7f_0d_0. That's the one I selected.
                  4. Change the order if desired, otherwise click save.
                • Way 2
                  1. Observe the console log and insert the desired device (such as a USB drive or other peripheral) and observe what appears in the console.
                  2. In my case it shows a new USB device was found, the vendor of the device, and the PCI slot information.
                    • Take note of this, it's needed for the next step.
                    • In my example, it showed: 00:1a.0
                    • Hint: You can also drop to a shell and run: lspci | grep USB if you're using a USB device.
                  3. Follow Step 3 from Way 1.
                • Note: Some devices require both PCI device IDs to be passed - such as the case of my dual NIC intel card. Had to identity and pass both PCI addresses.
            • nvidia - KVM GPU passthrough: group 15 is not viable. Please ensure all devices within the iommu_group are bound to their vfio bus driver.' - Ask Ubuntu - Not on TrueNAS but might offer some information in some cases.
            • IOMMU Issue with GPU Passthrough to Windows VM | TrueNAS Community
              • I've been attempting to create a Windows VM and pass through a GTX 1070, but I'm running into an issue. The VM runs perfectly fine without the GPU, but fails to boot once I pass through the GPU to the VM. I don't understand what the error message is telling me or how I can resolve the issue.
              • Update: I figured out how to apply the ACS patch, but it didn't work. Is this simply a hardware limitation because of the motherboard's shared PCIe lanes between the two x16 slots? Is this a TrueNAS issue? I'm officially at a loss.
              • This seems to be an issue with IOMMU stuff. You are not the only one.
              • Agreed, this definitely seems like an IOMMU issue. For some reason, the ACS patch doesn't split the IOMMU groups regardless of which modifier I use (downstream, multifunction, and downstream,multifunction). This post captures the same issues I'm having with the same lack of success.
    • Intel Virtualization Technology (VMX)
      • VT-x
        • Base Virtualization
        • virtualization - What is difference between VMX and VT-x? - Super User
          • The CPU flag for Intel Hardware Virtualization is VMX. VT-x is Intel Hardware Virtualization which means they are exactly the same. You change the value of the CPU flag by enabling or disabling VT-x within BIOS. If there isn't an option to enable VT-x within the firmware for your device then it cannot be enabled.
      • VT-d (IOMMU)
      • VT-c (Virtualization Technology for Connectivity)
        • Intel® Virtualization Technology for Connectivity (Intel® VT-c) is a key feature of many Intel® Ethernet Controllers.
        • With I/O virtualization and Quality of Service (QoS) features designed directly into the controller’s silicon, Intel VT-c enables I/O virtualization that transitions the traditional physical network models used in data centers to more efficient virtualized models by providing port partitioning, multiple Rx/Tx queues, and on-controller QoS functionality that can be used in both virtual and non-virtual server deployments.

Hardware Selection

These links will help you find the kit that suits your needs best.

  • If you are a company, buy a prebuilt system from iXsystems; do not roll your own.
  • Only use CMR based hard disks when building your NAS with traditional drives.
  • General
    • SCALE Hardware Guide | Documentation Hub
      • Describes the hardware specifications and system component recommendations for custom TrueNAS SCALE deployment.
      • From repurposed systems to highly custom builds, the fundamental freedom of TrueNAS is the ability to run it on almost any x86 computer.
      • This is a definite read before purchasing your hardware.
    • TrueNAS Mini - Enterprise-Grade Storage Solution for Businesses
      • TrueNAS Mini is a powerful, enterprise-grade storage solution for SOHO and businesses. Get more out of your storage with the TrueNAS Mini today.
      • TrueNAS Minis come standard with Western Digital Red Plus hard drives, which are especially suited for NAS workloads and offer an excellent balance of reliability, performance, noise-reduction, and power efficiency.*
      • Regardless of which drives you use for your system, purchase drives with traditional CMR technology and avoid those that use SMR technology.
      • (Optional) Boost performance by adding a dedicated, high-performance read cache (L2ARC) or by adding a dedicated, high-performance write cache (ZIL/SLOG)
        • I don't need this, but it is there if needed.
  • Other People's Setups
    • My crazy new Storage Server with TrueNAS Scale - YouTube | Christian Lempa
      • In this video, I show you my new storage server that I have installed with TrueNAS Scale. We talk about the hardware parts and things you need to consider, and how I've used the software on this storage build.
      • A very detailed video, watch before you purchase hardware.
      • Use ECC memory
      • He installed 64GB, but he has a file cache configured.
      • Don't buy a CPU with an integrated GPU; they don't tend to support ECC memory.
    • ZFS / TrueNAS Best Practices? - #5 by jode - Open Source & Web-Based - Level1Techs Forums - You hint at a very diverse set of storage requirements that benefit from tuning and proper storage selection. You will find a lot of passionate zfs fans because zfs allows very detailed tuning to different workloads, often even within a single storage pool. Let me start to translate your use cases into proper technical requirements for review and discussion. Then I’ll propose solutions again for discussion.
  • RAM
    • All TrueNAS hardware from iXsystems comes with ECC RAM.
    • ECC RAM - SCALE Hardware Guide | Documentation Hub
      • Electrical or magnetic interference inside a computer system can cause a spontaneous flip of a single bit of RAM to the opposite state, resulting in a memory error. Memory errors can cause security vulnerabilities, crashes, transcription errors, lost transactions, and corrupted or lost data. So RAM, the temporary data storage location, is one of the most vital areas for preventing data loss.
      • Error-correcting code or ECC RAM detects and corrects in-memory bit errors as they occur. If errors are severe enough to be uncorrectable, ECC memory causes the system to hang (become unresponsive) rather than continue with errored bits. For ZFS and TrueNAS, this behaviour virtually eliminates any chances that RAM errors pass to the drives to cause corruption of the ZFS pools or file errors.
      • To summarize the lengthy, Internet-wide debate on whether to use error-correcting code (ECC) system memory with OpenZFS and TrueNAS: Most users strongly recommend ECC RAM as another data integrity defense.
      • However:
        • Some CPUs or motherboards support ECC RAM but not all
        • Many TrueNAS systems operate every day without ECC RAM
        • RAM of any type or grade can fail and cause data loss
        • RAM failures usually occur in the first three months, so test all RAM before deployment.
    • TrueNAS on system without ECC RAM vs other NAS OS | TrueNAS Community
      • If you care about your data, intend for the NAS to be up 24x365, last for >4 years, then ECC is highly recommended.
      • ZFS is like any other file system: send corrupt data to the disks and you have corruption that can't be fixed. People say "But, wait, I can FSCK my EXT3 file system". Sure you can, and it will likely remove the corruption and any data associated with that corruption. That's data loss.
      • However, with ZFS you can't "fix" a corrupt pool. It has to be rebuilt from scratch, and likely restored from backups. So, some people consider that too extreme and use ECC. Or don't use ZFS.
      • All that said. ZFS does do something that other file systems don't. In addition to any redundancy, (RAID-Zx or Mirroring), ZFS stores 2 copies of metadata and 3 copies of critical metadata. That means if 1 block of metadata is both corrupt AND that ZFS can detect that corruption, (no certainty), ZFS will use another copy of metadata. Then fix the broken metadata block(s).
    • OpenMediaVault vs. TrueNAS (FreeNAS) in 2023 - WunderTech
      • Another highly debated discussion is the use of ECC memory with ZFS. Without diving too far into this, ECC memory detects and corrects memory errors, while non-ECC memory doesn’t. This is a huge benefit, as ECC memory shouldn’t write any errors to the disk. Many feel that this is a requirement for ZFS, and thus feel like ECC memory is a requirement for TrueNAS. I’m pointing this out because hardware options are minimal for ECC memory – at least when compared to non-ECC memory.
      • The counterpoint to this argument is that ECC memory helps all filesystems. The question you’ll need to answer is whether you want to run ECC memory with TrueNAS, because if you do, you’ll need to ensure that your hardware supports it.
      • On a personal level, I don’t run TrueNAS without ECC memory, but that’s not to say that you must. This is a huge difference between OpenMediaVault and TrueNAS and you must consider it when comparing these NAS operating systems.
      • = you should run TrueNAS with ECC memory where possible
    • How Much Memory Does ZFS Need and Does It Have To Be ECC? - YouTube | Lawrence Systems
      • You do not need a lot of memory for ZFS, but if you do use lots of memory you're going to get better performance out of ZFS (i.e. cache).
      • Using ECC memory is better but it is not a requirement. Tom uses ECC as shown on his TrueNAS servers.
  • Drive Bays
  • Storage Controllers
    • Don't use a RAID card for TrueNAS; use an HBA if you need extra drives.
    • How to identify HDD location | TrueNAS Community
      • You're using the wrong type of storage attachment. That's a RAID card, which means TrueNAS has no direct access to the disks and can't even see the serial numbers.
      • You need an HBA card instead if you want to protect your data. Back it all up now and get that sorted before doing anything else.
    • What's all the noise about HBA's, and why can't I use a RAID controller? | TrueNAS Community
      • An HBA is a Host Bus Adapter.
      • This is a controller that allows SAS and SATA devices to be attached to, and communicate directly with, a server.
      • RAID controllers typically aggregate several disks into a Virtual Disk abstraction of some sort, and even in "JBOD" or "HBA mode" generally hide the physical device.
  • Drives
    This is my TLDR:
    • General
      • You cannot change the Physical Sector size of any drive.
      • Solid State drives do not have physical sectors as they do not have platters; the LBA mapping is all handled internally by the drive. This means that changing a Solid State drive from 512e to 4Kn will give at most a minimal performance increase with ZFS (ashift=12), but it might be useful for NTFS, whose default cluster size is 4096B.
    • HDD (SATA Spinning Disks)
      • They come in a variety of Sector size configurations
        • 512n (512B Logical / 512B Physical)
        • 512e (512B Logical / 4096B Physical)
          • The 512e drive benefits from 4096B physical sectors whilst being able to emulate a 512 Logical sector for legacy OS.
        • 4Kn (4096B Logical / 4096B Physical)
          • The 4Kn drives are faster because their larger sector size requires less checksum data to be stored and read (512n = 8 checksums per 4KiB, 4Kn = 1 checksum).
        • Custom Logical
        • There are very few disks that allow you to set custom logical sector sizes, but quite a few that allow you to switch between 512e and 4Kn modes (usually NAS and professional drives).
        • Hot-swappable drives
    • SSD (SATA)
      • They are Solid State
      • Most if not all SSDs are 512n
      • A lot quicker than Spinning Disks
      • Hot-swappable drives
    • SAS
      • They come in Spinning Disk and Solid State.
      • Because of the environments these drives are designed for, most of them have configurable Logical Sector sizes.
      • Used mainly in data centres.
      • The connector will allow SATA drives to be connected.
      • I think SAS drives support multiple I/O paths/queues, unlike SATA but similar to NVMe.
      • Hot-swappable drives
    • NVMe
      • A lot of these drives come as 512n. I have seen a few that allow you to switch from 512e to 4Kn and back, and this does vary from manufacturer to manufacturer. The difference between the modes will not make a huge difference in performance.
      • These drives need a direct connection to the PCIe bus via PCIe lanes, usually 4.
      • They can get quite hot.
      • Can do multiple reads and writes at the same time due to the multiple PCIe lanes they are connected to.
      • A lot quicker than SATA SSDs.
      • Cannot hot-swap drives (in the M.2 form factor).
    • U.2
      • This is more a connection standard rather than a new type of drive.
      • I would avoid this technology, not because it is bad, but because U.3 is a lot better.
      • Hot-swappable drives (SATA/SAS only)
      • The end points (i.e. drive bays) need to be preset to either SATA/SAS or NVMe.
    • U.3 (Buy this kit when it is cheap enough)
      • This is more a connection standard rather than a new type of drive.
      • This is a revision of the U.2 standard and is where all drives will be moving to in the near future.
      • Hot-swappable drives (SATA/SAS/NVMe)
      • The same connector can accept SATA/SAS/NVMe without having to preset the drive type. This allows easy mix and matching using the same drive bays.
      • Can support SAS/SATA/NVMe drives all in the same form factor and socket, which means one drive bay and socket type for them all. Adapters are easy to get.
      • Will require a Tri-mode controller card.
    • General
      • You should use 4Kn drives on ZFS as 4096-byte blocks are the smallest size TrueNAS will write (ashift=12).
      • If your drive supports 4Kn, you should set it to this mode; it is better for performance, and if it was not, they would not have made it.
      • 512e drives are OK and should be fine for most people's home networks.
      • In Linux, the first SATA drive (`SATA 0`) is typically referred to as `sda`.
      • Error on a disk | TrueNAS Community
        • There's no need for drives to be identical, or even similar, although any vdev will obviously be limited by its least performing member.
        • Note, though that WD drives are merely marketed as "5400 rpm-class", whatever that means, and actually spin at 7200 rpm.
      • U.2 and NVMe - To speed up the PC performance | Delock - Some nice diagrams and explanations.
      • SAS vs SATA - Difference and Comparison | Diffen - SATA and SAS connectors are used to hook up computer components, such as hard drives or media drives, to motherboards. SAS-based hard drives are faster and more reliable than SATA-based hard drives, but SATA drives have a much larger storage capacity. Speedy, reliable SAS drives are typically used for servers while SATA drives are cheaper and used for personal computing.
    • What drives should I use?
      • Don't use (Pen drives / Thumb Drives / USB sticks / USB hard drives) for storage or your boot drive either.
      • Use CMR HDD drives, SSD, NVMe for storage and boot.
      • Update: WD Red SMR Drive Compatibility with ZFS | TrueNAS Community
        • Thanks to the FreeNAS community, we uncovered and reported on a ZFS compatibility issue with some capacities (6TB and under) of WD Red drives that use SMR (Shingled Magnetic Recording) technology. Most HDDs use CMR (Conventional Magnetic Recording) technology which works well with ZFS. Below is an update on the findings and some technical advice.
        • WD Red TM Pro drives are CMR based and designed for higher intensity workloads. These work well with ZFS, FreeNAS, and TrueNAS.​
        • WD Red TM Plus is now used to identify WD drives based on CMR technology. These work well with ZFS, FreeNAS, and TrueNAS.​
        • WD Red TM is now being used to identify WD drives using SMR, or more specifically, DM-SMR (Device-Managed Shingled Magnetic Recording). These do not work well with ZFS and should be avoided to minimize risk.​
        • There is an excellent SMR Community forum post (thanks to Yorick) that identifies SMR drives from Western Digital and other vendors. The latest TrueCommand release also identifies and alerts on all WD Red DM-SMR drives.
        • The new TrueNAS Minis only use WD Red Plus (CMR) HDDs ranging from 2-14TB. Western Digital’s WD Red Plus hard drives are used due to their low power/acoustic footprint and cost-effectiveness. They are also a popular choice among FreeNAS community members building systems of up to 8 drives.
        • WD Red Plus is the one of the most popular drives the FreeNAS community use.
    • CMR vs SMR
      • List of known SMR drives | TrueNAS Community - This explains some of the differences of `SMR vs CMR` along with a list of some drives
      • Device-Managed Shingled Magnetic Recording (DMSMR) - Western Digital - Find out everything you want to know about how Device-Managed SMR (DMSMR) works.
      • List of known SMR drives | TrueNAS Community
        • Hard drives that write data in overlapping, "shingled" tracks, have greater areal density than ones that do not. For cost and capacity reasons, manufacturers are increasingly moving to SMR, Shingled Magnetic Recording. SMR is a form of PMR (Perpendicular Magnetic Recording). The tracks are perpendicular, they are also shingled - layered - on top of each other. This table will use CMR (Conventional Magnetic Recording) to mean "PMR without the use of shingling".
        • SMR allows vendors to offer higher capacity without the need to fundamentally change the underlying recording technology.
          New technology such as HAMR (Heat Assisted Magnetic Recording) can be used with or without shingling. The first drives are expected in 2020, in either flavor.
        • SMR is well suited for high-capacity, low-cost use where writes are few and reads are many.
        • SMR has worse sustained write performance than CMR, which can cause severe issues during resilver or other write-intensive operations, up to and including failure of that resilver. It is often desirable to choose a CMR drive instead. This thread attempts to pull together known SMR drives, and the sources for that information.
        • There are three types of SMR:
          1. Drive Managed, DM-SMR, which is opaque to the OS. This means ZFS cannot "target" writes, and is the worst type for ZFS use. As a rule of thumb, avoid DM-SMR drives, unless you have a specific use case where the increased resilver time (a week or longer) is acceptable, and you know the drive will function for ZFS during resilver. See (h)
          2. Host Aware, HA-SMR, which is designed to give ZFS insight into the SMR process. Note that ZFS code to use HA-SMR does not appear to exist. Without that code, a HA-SMR drive behaves like a DM-SMR drive where ZFS is concerned.
          3. Host Managed, HM-SMR, which is not backwards compatible and requires ZFS to manage the SMR process.
        • I am assuming ZFS does not currently handle HA-ZFS or HM-ZFS drives, as this would require Block Pointer Rewrite. See page 24 of (d) as well as (i) and (j).
      • Western Digital implies WD Red NAS SMR drive users are responsible for overuse problems – Blocks and Files
        • Has some excellent diagrams showing what is happening on the platters.
    • Western Digital
    • NVMe (SGFF)/U.2/U.3 - The way forward

General TrueNAS Notes

Particular pages I found useful. The TrueNAS Documentation Hub has excellent tutorials and information. For some things you have to refer to the TrueNAS CORE documentation as it is more complete.

  • TrueNAS as an APP
    1. Browse to your TrueNAS server with your Mobile Phone or Tablet
    2. Bring up the browser menu and click on "Add to Home Screen"
    3. Click Add
    4. You now have TrueNAS as an APP on your mobile device.
  • Websites
  • Reviews
    • TrueNAS Software Review – NAS Compares
      • Have you been considering a NAS for a few years, but looked at the price tag of off-the-shelf featured solutions from Synology or QNAP and thought “wow, that seems rather expensive for THAT hardware”? Or are you someone that wants a NAS, but also has an old PC system or components around that could go towards building one? Or perhaps you are a user who wants a NAS, but HAS the budget, HAS the hardware, but also HAS the technical knowledge to understand EXACTLY the system setup, services and storage configuration you need? If you fall into one of those three categories, then there is a good chance that you have considered TrueNAS (formerly FreeNAS).
      • This is a massive review of TrueNAS CORE and is a must read.
  • SCALE vs CORE vs Enterprise vs Others
  • TrueCommand
  • Setup Tutorials
    • How to setup TrueNAS, free NAS operating system - How to setup TrueNAS - detailed step-by-step guide on how setup TrueNAS system on a Windows PC and use it for storing data.
    • How to setup your own NAS server | TechRadar - OpenMediaVault helps you DIY your way to a robust, secure, and extensive NAS device
    • Getting Started with TrueNAS Scale | Part 1 | Hardware, Installation and Initial Configuration - Wikis & How-to Guides - Level1Techs Forums - This Guide will be the first in a series of Wikis to get you started with TrueNAS Scale. In this Wiki, you’ll learn everything you need to get from zero to being ready for setting up your first storage pool. Hardware Recommendations The Following Specifications are what I would personally recommend for a reasonable minimum of a Server that will run in (Home) Production 24/7. If you’re just experimenting with TrueNAS, less will be sufficient and it is even possible to do so in a Virtual Machine.
    • 6 Crucial Settings to Enable on TrueNAS SCALE - YouTube
      • This video goes over many common settings (automations) that I highly recommend every user enables when setting up TrueNAS SCALE or even TrueNAS CORE.
      • The 6 things:
        • Backup system dataset
        • HDD Smart Tests
        • HDD Long Tests
        • Pool Scrubs
          • Running this often helps prevent pool/file corruption.
          • A scrub goes through/reads every single file on the pool and verifies it against its checksums; if no bit rot or corruption is found, then TrueNAS knows the pool is OK.
          • If file errors are found, TrueNAS fixes them without prompting, as long as the file is not too corrupt.
          • You want to run scrubs fairly often because errors can stack up, ZFS can only repair so many of them, and accumulating errors can be a sign of a failing drive.
        • Snapshots and scheduling them.
          • Setting up periodic snapshots helps prevent malware/ransomware from robbing you of your data.
        • TrueNAS backup
          • RSync (a lot of endpoints)
          • Cloud Sync (any cloud provider)
          • Replication (to another TrueNAS box)
          • Check you can restore backups at least every 6 months or more often depending on the data you keep.
    • Getting Started With TrueNAS Scale Beta - YouTube | Lawrence Systems - A short video on how to start with TrueNAS SCALE but with an emphasis on moving from TrueNAS CORE.
    • TrueNAS Scale - Linux based NAS with Docker based Application Add-ons using Kubernetes and Helm. - YouTube | Awesome Open Source
      • TrueNAS is a name you should know. Maybe you know it as FreeNAS, but it's been TrueNAS CORE for a while now. It is BSD based, and solid as far as NAS systems go. But now, they've started making a bold move to bring us this great NAS system in Linux form. Using Docker and Helm as the basis of their add-ons they have taken what was already an amazing, open source project, and given it new life. The Docker eco-system, even in the early alpha / beta stages, has added so much to this amazing NAS!
      • This video is relatively old but it does show the whole procedure, from initially setting up TrueNAS SCALE to installing apps.
  • Settings
  • Misc
    • Check Linux Block Device with Examples - howtouselinux - A block device is a storage device that moves data in sequences of bytes or bits (blocks). These devices support random access and generally use buffered I/O. Examples include hard disks, CD-ROM drives, and flash drives. A block device can be physically attached to a computer or accessed remotely as if it were physically attached to the computer.
  • Errors
    • Username or password is wrong even though I know my password.
      • When setting up TrueNAS, do not use # symbols in the password, it does not like it.
      • `admin` is the GUI user unless you choose to use `root`
      • You can use the # symbol in your password when you change the `admin` account password from the GUI
      • So you should use a simple password on setup and then change it in the GUI after your TrueNAS is set up.
  • 'My' Pool Naming convention
    1. You can use: (cartoon characters|movie characters|planets|animals|constellations|types of Fraggle|Muppet names): e.g. you can choose large animals for storage, (smaller|faster) animals for NVMe etc.
    2. Should not be short or an ordinary word, so you are at less risk of making a mistake on the CLI.
    3. Start with a capital letter, again so you are at less risk of making a mistake on the CLI.
    4. (optional) It should be almost descriptive of what the pool does, i.e. `sloth` for slow drives.
    5. It should be a single word.
    6. Examples:
      • Fast/Mag = too short
      • WCoyote + RoadRunner = almost but the double words will be awkward to type all the time.
      • Lion/Cat/Kitten = Cat could be mistaken for the Linux command `cat` and is too short.
    7. Some other opinions

ZFS / Managing Storage (This is the knowledge section.....)

  • This is my overview of ZFS
    • ZFS - is more than a file system
    • Pool - A grouping of one or more VDEVs; it is the pool that is mounted in the filesystem (e.g. /mnt/Magnetic_Storage).
    • VDEV - A virtual device that controls one or more assigned hard drives in a defined topology/role; one or more VDEVs make up a Pool.
    • Dataset - These define file system containers on the storage pool in a hierarchical structure.
    • ZVol - A block-level device that presents pool storage as a virtual disk with minimal overhead. These are used primarily for virtual hard disks.
  • Example Commands
    • A small collection of ZFS Commands
      # Manual/Documentation = output the command's help/manual page
      man <command>  
      man zfs
      man zfs send
      
      # Shows all ZFS mounts, not Linux mounts.
      zfs mount
      
      # Show dataset information
      zfs list
      zfs list -o name,quota,refquota,reservation,refreservation
      zfs get all rpool/data1
      zfs get used,referenced,reservation,volsize,volblocksize,refreservation,usedbyrefreservation MyPoolA/Virtual_Disks/roadrunner
      
      # Get pool ashift value
      zpool get ashift MyPoolA
  • TrueNAS Tutorials
    • Setting Up Storage | Documentation hub
      • Provides basic instructions for setting up your first storage pool and dataset or zvol.
      • The root dataset of the first pool you create automatically becomes the system dataset.
    • To view storage errors, start here:
      • Storage -->
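      • As a CLI cross-check (not a substitute for the GUI page), a rough sketch assuming shell access; the pool name below is an example:
        # Show pool health, per-device read/write/checksum error counters and any ongoing scrub/resilver
        zpool status -v MyPoolA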
    • Importing Data | Documentation Hub
      • Provides instructions for importing data (from a disk) and monitoring the import progress.
      • Importing is a one-time procedure that copies data (from a physical disk) into a TrueNAS dataset.
      • TrueNAS can only import one disk at a time, and you must install or physically connect it to the TrueNAS system.
      • Supports the following filesystems
        • UFS
        • NTFS
        • MSDOSFS
        • EXT2FS
        • EXT3 (partially)
        • EXT4 (partially)
  • ZFS General
    • Make sure your drives all have the same sector size, preferably 4096 bytes (4K/4Kn). ZFS's smallest writes are 4K (with ashift=12). Do not use drives with different sector sizes on ZFS; this is bad.
    • Built into the ZFS spec is a caveat that you should NOT allow your ZVOL to get over 80% in use.
    • A ZVol is block storage, while Datasets are file-based. (this is a very simplistic explanation)
    • ZFS for Dummies · Victor's Blog
      • A ZFS cheat sheet for beginners with graphics.
      • Most if not all of the commands are explained; mount and unmount are an example.
    • ZFS Cheat Sheet - Matt That IT Guy - This isn’t supposed to be an all encompassing guide, but rather more of a taste of what can be done without going down the rabbit hole.
    • What Do All These Terms Mean? - TrueNAS OpenZFS Dictionary | TrueNAS
      • If you are new to TrueNAS and OpenZFS, its operations and terms may be a little different than those used by other storage providers. We frequently get asked for the description of an OpenZFS term or how TrueNAS technology compares to other technologies.
      • This blog post addresses the most commonly requested OpenZFS definitions.
    • TrueNAS Storage Primer on ZFS for Data Storage Professionals | TrueNAS
      • New to TrueNAS and OpenZFS? Their operations and terms may be a little different for you. The purpose of this blog post is to provide a basic guide on how OpenZFS works for storage and to review some of the terms and definitions used to describe storage activities on OpenZFS.
      • This is a great overview of OpenZFS.
      • Has a diagram showing the hierarchy.
      • This is an excellent overview and description and is a good place to start.
    • 20. ZFS Primer — TrueNAS®11.3-U5 User Guide Table of Contents - An overview of the features provided by ZFS.
    • ZFS Configuration Part 2: ZVols, LZ4, ARC, and ZILs Explained - The Passthrough POST - In our last article, we touched upon configuration and basic usage of ZFS. We showed ZFS’s utility including snapshots, clones, datasets, and much more. ZFS includes many more advanced features, such as ZVols and ARC. This article will attempt to explain their usefulness as well.
    • What is ZFS? Why are People Crazy About it?
      • Today, we will take a look at ZFS, an advanced file system. We will discuss where it came from, what it is, and why it is so popular among techies and enterprise.
      • Unlike most files systems, ZFS combines the features of a file system and a volume manager. This means that unlike other file systems, ZFS can create a file system that spans across a series of drives or a pool. Not only that but you can add storage to a pool by adding another drive. ZFS will handle partitioning and formatting.
    • ZFS - Wikipedia
    • ZFS - Debian Wiki
    • ZFS Best Practices Guide (PDF) | solarisinternals.com
    • OpenZFS - openSUSE Wiki
      • ZFS is a combined file system and logical volume manager designed by Sun Microsystems. The features of ZFS include protection against data corruption, support for high storage capacities, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs, and can be very precisely configured. The two main implementations, by Oracle and by the OpenZFS project, are extremely similar, making ZFS widely available within Unix-like systems.
    • Oracle Solaris ZFS Administration Guide - This book is intended for anyone responsible for setting up and administering Oracle ZFS file systems. Topics are described for both SPARC and x86 based systems, where appropriate.
    • Introducing ZFS Properties - Oracle Solaris Administration: ZFS File Systems - This book is intended for anyone responsible for setting up and administering Oracle ZFS file systems. Topics are described for both SPARC and x86 based systems, where appropriate.
    • Chapter 22. The Z File System (ZFS) | FreeBSD Documentation Portal - ZFS is an advanced file system designed to solve major problems found in previous storage subsystem software
    • ZFS on Linux - Proxmox VE - An overview of the features of ZFS.
    • ZFS 101—Understanding ZFS storage and performance | Ars Technica - Learn to get the most out of your ZFS filesystem in our new series on storage fundamentals.
    • OpenZFS
    • Kernel/Reference/ZFS - Ubuntu Wiki
    • ZFS Administration | SCALE 11x - Presentation with PDF from Aaron Toponce.
    • ZFS Cheat Sheet - Matt That IT Guy - In a similar fashion to this VMware & Cisco cheat sheet that I wrote up not too long ago, I figure I would do a similar one for ZFS. Note that this isn’t supposed to be an all encompassing guide, but rather more of a taste of what can be done without going down the rabbit hole.
    • Getting Started with TrueNAS Scale | Part 2 | Learning ZFS Storage in TrueNAS; Creating a Pool, Dataset and Snapshot Task - Wikis & How-to Guides - Level1Techs Forums - This builds on the first wiki in this series, which you can find here. After having installed and configured the Basics of TrueNAS Scale, we’ll learn about Storage Pools, VDEVs and Datasets to configure our First Pool and a Custom Dataset. A Snapshot Task will be created as well.
    • TrueNAS ZFS VDEV Pool Design Explained: RAIDZ RAIDZ2 RAIDZ3 Capacity, Integrity, and Performance - YouTube | Lawrence Systems
      • When setting up ZFS pools performance, capacity and data integrity all have to be balanced based on your needs and budget. It’s not an easy decision to make so I wanted to post some references here to help you make a more informed decision.
    • ZFS 101: Leveraging Datasets and Zvols for Better Data Management - YouTube | Lawrence Systems
      • Excellent video on datasets and ZVol
      • ZFS Datasets are like enhanced directories with extra features; the video covers why they differ from plain directories, how they fit into your structure, and why you should be using them.
      • We will also talk about z-vol and how they function as a virtual block device within the ZFS environment.
      • Datasets and ZVOL live within an individual ZFS Pool
      • ZVOL
        • ZVOL is short for `ZFS Volume` and is a virtual block device within your ZFS storage pool; you can think of it as a virtual hard drive presented by the pool.
        • A ZVol can be set up as either `Thick` or `Thin` provisioned (the `Sparse` option controls this):
          • Thick Provisioned = Pre-assign all disk space (= VirtualBox fixed disk size)
          • Thin Provisioned = Only assign used space (= VirtualBox dynamic disk size) (Sparse = On)
        • Primary Use Cases of Zvol
          • Local Virtual machine block device (hard drive) for virtualization inside of TrueNAS
          • iSCSI storage targets that can be used for any applications that use iSCSI
        • ZVols are not mounted in the file system; you only see them in the GUI (or with the ZFS CLI tools).
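        • A rough CLI illustration of thick vs thin (sparse) zvol creation; the names and size are hypothetical examples, and TrueNAS normally creates zvols for you from the GUI:
          # Thick provisioned zvol: the full 50G is reserved up front
          zfs create -V 50G MyPoolA/Virtual_Disks/vm1-disk0

          # Thin provisioned (sparse) zvol: space is only consumed as data is written
          zfs create -s -V 50G MyPoolA/Virtual_Disks/vm2-disk0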
      • iSCSI
        • An IP-based hard drive. It presents as a hard drive, so a remote OS (Windows, Linux or others) can use it as such.
        • Tom touches briefly on iSCSI, how he uses it for his PC games, and how to set it up.
      • Datasets
        • Datasets can be nested as directories in other datasets.
        • He uses the name `Virtual_Disks` for his virtual machines, and there is also an `ISO_Storage` folder for his ISOs in that dataset.
        • There is a `Primary dataset` which everything else gets nested under.
        • Different datasets are better than different folders because you can put different policies on each dataset.
        • Tom puts all apps under a dataset called `TrueCharts` and then each app has its own dataset, which makes sense (also, because Nextcloud stores files as well, he calls that dataset `Nextcloud_Database`).
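        • A minimal CLI sketch of that kind of nested dataset layout, using hypothetical pool/dataset names (in TrueNAS you would normally create these from the GUI):
          # Parent dataset for VM storage, with a nested dataset for ISOs
          zfs create MyPoolA/Virtual_Disks
          zfs create MyPoolA/Virtual_Disks/ISO_Storage

          # Per-app datasets nested under a parent, each of which can carry its own policies
          zfs create MyPoolA/TrueCharts
          zfs create -o compression=lz4 MyPoolA/TrueCharts/Nextcloud_Database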
    • Introduction to ZFS (pdf) | TrueNAS Community - This is a short introduction to ZFS. It is really only intended to convey the bare minimum knowledge needed to start diving into ZFS and is in no way meant to cut Michael W. Lucas' and Allan Jude's book income. It is a bit of a spiritual successor to Cyberjock's presentation, but streamlined and focused on ZFS, leaving other topics to other documents.
    • ZFS tuning cheat sheet – JRS Systems: the blog
      • Quick and dirty cheat sheet for anyone getting ready to set up a new ZFS pool. Here are all the settings you’ll want to think about, and the values I think you’ll probably want to use.
      • Has all the major terms explained simply.
    • RAIDZ Types Reference
      • RAIDZ levels reference covers various aspects and tradeoffs of the different RAIDZ levels.
      • Brilliant and simple diagrams of the different RAIDZ levels.
    • An Introduction to ZFS A Place to Start - ServeTheHome
      • In this article, Nick gives an introduction to ZFS which is a good place to start for the novice user who is contemplating ZFS on Linux or TrueNAS.
      • Excellent article.
    • ZFS for Newbies - YouTube | EuroBSDcon
      • Dan Langille thinks ZFS is the best thing to happen to filesystems since he stopped using floppy disks. ZFS can simplify so many things and lets you do things you could not do before. If you’re not using ZFS already, this entry-level talk will introduce you to the basics.
      • This talk is designed to get you interested in ZFS and see the potential for making your data safer and your sysadmin duties lighter. If you come away with half the enthusiasm for ZFS that Dan has, you’ll really enjoy ZFS and appreciate how much easier it makes every-day tasks.
      • Things we will cover include:
        • a short history of the origins
        • an overview of how ZFS works
        • replacing a failed drive
        • why you don’t want a RAID card
        • scalability
        • data integrity (detection of file corruption)
        • why you’ll love snapshots
        • sending of filesystems to remote servers
        • creating a mirror
        • how to create a ZFS array with multiple drives which can lose up to 3 drives without loss of data.
        • mounting datasets anywhere in other datasets
        • using zfs to save your current install before upgrading it
        • simple recommendations for ZFS arrays
        • why single drive ZFS is better than no ZFS
        • no, you don’t need ECC
        • quotas
        • monitoring ZFS
    • A detailed guide to TrueNAS and OpenZFS | Jason Rose
      • This guide is not intended to replace the official TrueNAS or OpenZFS documentation. It will not provide explicit instructions on how to create a pool, dataset, or share, nor will it exhaustively document everything TrueNAS and OpenZFS have to offer. Instead, it's meant to supplement the official docs by offering additional context around the huge range of features that TrueNAS and OpenZFS support.
      • Also covers various aspects of hardware, including a brilliant explanation of ECC RAM: not required, but better to have it.
    • ZFS Tuning Recommendations | High Availability - Guide to tuning and optimising a ZFS file system.
    • XFS vs ZFS vs Linux Raid - ServerMania - What is the difference between XFS vs ZFS and Linux Raid (Redundant Array of Independent Disks)? We explain the difference with examples here.
    • The path to success for block storage | TrueNAS Community - ZFS does two different things very well. One is storage of large sequentially-written files, such as archives, logs, or data files, where the file does not have the middle bits modified after creation. The other is storage of small, randomly written and randomly read data.
  • Compression on ZVols, Datasets and Free Space
    • Is the ZFS compression good thing or not to save space on backup disk on TrueNAS? | TrueNAS Community
      • LZ4 is on by default, it has a negligible performance impact and will compress anything that can be.
    • Available Space difference from FreeNAS and VMware | TrueNAS Community
      • You don't have any business trying to use all the space. ZFS is a copy on write filesystem, and needs significant amounts of space free in order to keep performing at acceptable levels. Your pool should probably never be filled more than 50% if you want ESXi to continue to like your FreeNAS ZFS datastore.
      • So. Moving on. Compression is ABSOLUTELY a great idea. First, a compressed block will transfer from disk more quickly, and CPU decompression is gobs faster than SATA/SAS transfer of a larger sized uncompressed block of data. Second, compression increases the pool free space. Since ZFS write performance is loosely tied to the pool occupancy rate, having more free space tends to increase write performance.
      • Well, ZFS won't be super happy at 50-60%. Over time, what happens is that fragmentation increases on the pool and the ability of ZFS to rapidly find contiguous ranges of free space drops, which impacts write performance. You won't see this right away... some people fill their pool to 80% and say "oh speeds are great, I'll just do this then" but then as time passes and they do a lot of writes to their pool, the performance falls like a rock, because fragmentation has increased. ZFS fools you at first because it can be VERY fast even out to 95% the first time around.
      • Over time, there is more or less a bottom to where performance falls to. If you're not doing a lot of pool writes, you won't get there. If you are, you'll eventually get there. So the guys at Delphix actually took a single disk and tested this, and came up with what follows:
      • An excellent diagram of % Pool Full vs. Steady State Throughput.
    • VM's using LZ4 compression - don't? | Reddit
      • After fighting and fighting to get any sort of stability out of my VMs running on ZFS, I found the only way to get them to run with any useful level of performance was to disable LZ4 compression. Performance went from 1 minute to boot to 5 seconds, and generic things such as catting a log file that would take many seconds are now instant.
      • Bet you it wasn’t lz4 but the fact that you don’t have an SLOG and have sync writes on the VMs.
      • Been running several terabytes of VM's on LZ4 for 5 years now. Just about any modern CPU will be able to compress/decompress at line speed.
      • I've run dozens of VMs off of FreeNAS/TrueNAS with LZ4 enabled over NFS and iSCSI. Never had a problem. On an all-flash array I had (with tons of RAM and 10Gb networking), reboots generally took less than 6 seconds from hitting "reboot" to being at the login screen again.
    • ZFS compression on sparce zvol - space difference · Issue #10260 · openzfs/zfs · GitHub
      • Q: I'm compressing a dd img of a 3TB drive onto a zvol in ZFS for Linux. I enabled compression (lz4) and let it transfer. The pool just consists of one 3TB drive (for now). I am expecting to have 86Gigs more in zfs list than I appear to.
      • A:
        • 2.72 TiB * 0.03125 = approximately 85 GiB reserved for spa_slop_space - that is, the space ZFS reserves for its own use so that you can't run out of space while, say, deleting things.
        • If you think that's too much reserved, you can tune spa_slop_shift from 5 to 6 - the formula is [total space] * 1/2^(spa_slop_shift), so increasing it from 5 to 6 will halve the usage.
        • I'm not going to try and guess whether this is a good idea for your pool. It used to default to 6, so it's probably not going to cause you problems unless you get into serious edge cases and completely out of space.
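    • To check whether compression is actually saving space on a dataset or zvol, the relevant properties can be read from the shell; a small sketch with example names:
      # Show the compression algorithm and how much space it is saving
      zfs get compression,compressratio,used,logicalused MyPoolA/Virtual_Disks

      # Enable LZ4 on a dataset (only newly written blocks are compressed)
      zfs set compression=lz4 MyPoolA/Virtual_Disks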
  • Scrub and Resilver
    • zfs: scrub vs resilver (are they equivalent?) - Server Fault
      • A scrub reads all the data in the zpool and checks it against its parity information.
      • A resilver re-copies all the data in one device from the data and parity information in the other devices in the vdev: for a mirror it simply copies the data from the other device in the mirror, from a raidz device it reads data and parity from remaining drives to reconstruct the missing data.
      • They are not the same, and in my interpretation they are not equivalent. If a resilver encounters an error when trying to reconstruct a copy of the data, this may well be a permanent error (since the data can't be correctly reconstructed any more). Conversely if a scrub detects corruption, it can usually be fixed from the remaining data and parity (and this happens silently at times in normal use as well).
    • zpool-scrub.8 — OpenZFS documentation
    • zpool-resilver.8 — OpenZFS documentation
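    • Typical commands for starting and monitoring a scrub from the shell (the pool name is an example; TrueNAS normally runs scrubs for you via scheduled scrub tasks):
      # Start a scrub of the pool, then check its progress and any errors found
      zpool scrub MyPoolA
      zpool status MyPoolA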
  • ashift
    • What is ashift?
      • TrueNAS ZFS uses ashift=12 (4K reads and writes) by default, which will work with 512n/512e/4Kn drives without issue because the ashift is larger than or equal to the physical sector size of the drive.
      • You can use a higher ashift than the drive's physical sectors without a performance hit, as ZFS will make sure the sector boundaries all line up correctly, but you should never use a lower ashift, as this will cause a massive performance hit and could cause data corruption.
      • You can use ashift=12 on 512n/512e/4Kn (512 or 4096 byte logical sector) drives.
      • ashift is immutable and is set per vdev, not per pool. Once set it cannot be changed.
      • The smallest ashift TrueNAS will use by default is ashift=12 (ZFS itself supports values down to 9).
      • Windows will always use the logical block size presented to it, so a 512e (512/4096) drive will use 512-byte sectors, but ZFS can override this and use 4K blocks by setting ashift; in effect each 4K ZFS block spans 8 of the 512-byte logical sectors.
      • ZFS with ashift=12 will always read/write in 4K blocks and will be correctly aligned to the drive's underlying physical boundaries.
    • What ashift are my vdevs/pool using?
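      • A quick way to check from the shell; the pool name is an example, and if zdb cannot find the pool it may need to be pointed at the system's zpool.cache file with -U:
        # Report the ashift pool property (the value used for new vdevs; 0 = auto-detect)
        zpool get ashift MyPoolA

        # Inspect the pool configuration, which lists the ashift recorded for each vdev
        zdb -C MyPoolA | grep ashift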
    • Performance
      • ZFS tuning cheat sheet – JRS Systems: the blog
        • Ashift tells ZFS what the underlying physical block size your disks use is. It’s in bits, so ashift=9 means 512B sectors (used by all ancient drives), ashift=12 means 4K sectors (used by most modern hard drives), and ashift=13 means 8K sectors (used by some modern SSDs).
        • If you get this wrong, you want to get it wrong high. Too low an ashift value will cripple your performance. Too high an ashift value won’t have much impact on almost any normal workload.
        • Ashift is per vdev, and immutable once set. This means you should manually set it at pool creation, and any time you add a vdev to an existing pool, and should never get it wrong because if you do, it will screw up your entire pool and cannot be fixed.
        • Best Value = 12
      • ZFS Tuning Recommendations | High Availability - Guide to tuning and optimising a ZFS file system.
        • The ashift property determines the block allocation size that ZFS will use per vdev (not per pool as is sometimes mistakenly thought).
        • Ideally this value should be set to the sector size of the underlying physical device (the sector size being the smallest physical unit that can be read or written from/to that device).
        • Traditionally hard drives had a sector size of 512 bytes; nowadays most drives come with a 4KiB sector size and some even with an 8KiB sector size (for example modern SSDs).
        • When a device is added to a vdev (including at pool creation) ZFS will attempt to automatically detect the underlying sector size by querying the OS, and then set the ashift property accordingly. However, disks can mis-report this information in order to provide for older OS's that only support 512 byte sector sizes (most notably Windows XP). We therefore strongly advise administrators to be aware of the real sector size of devices being added to a pool and set the ashift parameter accordingly.
      • Sector size for SSDs | TrueNAS Community
        • There is no benefit to change the default values of TrueNAS, except if your NVME SSD has 8K physical sectors, in this case you have to use ashift=13
      • TrueNAS 12 4kn disks | TrueNAS Community
        • Q: Hi, I'm new to TrueNAS and I have some WD drives that should be capable to convert to 4k sectors. I want to do the right thing to get the best performance and avoid emulation. The drives show as 512e (512/4096)
        • A: There will be no practically noticeable difference in performance as long as your writes are multiples of 4096 bytes in size and properly aligned. Your pool seems to satisfy both criteria, so it should be fine.
        • FreeBSD and FreeNAS have a default ashift of 12 for some time now. Precisely for the proliferation of 4K disks. The disk presenting a logical block size of 512 for backwards compatibility is normal.
      • Project and Community FAQ — OpenZFS documentation
        • Improve performance by setting ashift=12: You may be able to improve performance for some workloads by setting ashift=12. This tuning can only be set when block devices are first added to a pool, such as when the pool is first created or when a new vdev is added to the pool. This tuning parameter can result in a decrease of capacity for RAIDZ configurations.
        • Advanced Format (AF) is a new disk format which natively uses a 4,096 byte, instead of 512 byte, sector size. To maintain compatibility with legacy systems many AF disks emulate a sector size of 512 bytes. By default, ZFS will automatically detect the sector size of the drive. This combination can result in poorly aligned disk accesses which will greatly degrade the pool performance.
        • Therefore, the ability to set the ashift property has been added to the zpool command. This allows users to explicitly assign the sector size when devices are first added to a pool (typically at pool creation time or adding a vdev to the pool). The ashift values range from 9 to 16 with the default value 0 meaning that zfs should auto-detect the sector size. This value is actually a bit shift value, so an ashift value for 512 bytes is 9 (2^9 = 512) while the ashift value for 4,096 bytes is 12 (2^12 = 4,096).
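      • For reference, this is roughly what explicitly setting ashift looks like from the CLI when creating a pool or adding a vdev (pool and device names are examples; the TrueNAS GUI normally handles this for you):
        # Create a pool with 4K blocks regardless of what the disks report
        zpool create -o ashift=12 MyPoolB raidz2 sdb sdc sdd sde

        # Add a mirror vdev to an existing pool, again forcing ashift=12
        zpool add -o ashift=12 MyPoolB mirror sdf sdg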
    • Misc
      • These are the different ashift values that you might come across and will help show you what they mean visually. Every ashift upwards is twice as large as the last one. The ashift values range from 9 to 16 with the default value 0 meaning that zfs should auto-detect the sector size.
        ashift / ZFS Block size (Bytes)
        0=Auto
        9=512
        10=1024
        11=2048
        12=4096
        13=8192
        14=16384
        15=32768
        16=65536
      • Preferred Ashift by George Wilson - YouTube | OpenZFS - From OpenZFS Developer Summit 2017 (day 2)
      • ashifting a-gogo: mixing 512e and 512n drives | TrueNAS Community
        • Q:
          • The *33 are SATA and 512-byte native, the *34 are SAS and 512-byte emulated. According to Seagate datasheets.
          • I've mixed SAS and SATA often, and that seems to always work fine. But afaik, mixing 512n and 512e is a new one for me.
          • Before I commit for the lifetime of this RAIDZ3 pool, is my own conclusion correct: all this needs is an ashift of 12 and we're good to go...?
        • A: Yes
  • Pool (ZPool / ZFS Pool / Storage Pool)
    • General
      • A Pool is a combination of one or more VDEVs, at least one of which must be a DATA VDEV.
      • If you have multiple VDEVs then the pool is striped across the VDEVs.
      • The pool is mounted in the filesystem (e.g. /mnt/Magnetic_Storage) and all of its datasets are mounted within this.
      • Pools | Documentation Hub
        • Tutorials for creating and managing storage pools in TrueNAS SCALE.
        • Storage pools are attached drives organized into virtual devices (vdevs). ZFS and TrueNAS periodically reviews and “heals” whenever a bad block is discovered in a pool. Drives are arranged inside vdevs to provide varying amounts of redundancy and performance. This allows for high performance pools, pools that maximize data lifetime, and all situations in between.
      • TrueNAS Storage Primer on ZFS for Data Storage Professionals | TrueNAS
        • Storage Pools
          • The highest level of storage abstraction on TrueNAS is the storage pool. A storage pool is a collection of storage devices such as HDDs, SSDs, and NVDIMMs, NVMe, that enables the administrator to easily manage storage utilization and access on the system.
          • A storage pool is where data is written or read by the various protocols that access the system. Once created, the storage pool allows you to access the storage resources by either creating and sharing file-based datasets (NAS) or block-based zvols (SAN).
    • ZFS Record Size
      • About ZFS recordsize – JRS Systems: the blog
        • ZFS stores data in records, which are themselves composed of blocks. The block size is set by the ashift value at time of vdev creation, and is immutable.
        • The recordsize, on the other hand, is individual to each dataset(although it can be inherited from parent datasets), and can be changed at any time you like. In 2019, recordsize defaults to 128K if not explicitly set.
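      • Because recordsize is a per-dataset property, it can be inspected and changed at any time from the shell; the names and values below are examples, and a new value only affects newly written blocks:
        # Show the current recordsize of a dataset
        zfs get recordsize MyPoolA/Virtual_Disks

        # Use larger records for a dataset holding big sequential files
        # (1M needs the large_blocks pool feature, which is enabled by default on modern pools)
        zfs set recordsize=1M MyPoolA/Media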
    • Planning a Pool
      • How many drives do I need for ZFS RAID-Z2? - Super User
        • An in-depth answer.
        • Hence my recommendation: If you want three drives ZFS, and want redundancy, set them up as a three-way mirror vdev. If you want RAID-Z2, use a minimum of four drives, but keep in mind that you lock in the number of drives in the vdev at the time of vdev creation. Currently, the only way to grow a ZFS pool is by adding additional vdevs, or increasing the size of the devices making up a vdev, or creating a new pool and transferring the data. You cannot increase the pool's storage capacity by adding devices to an existing vdev.
      • Path to Success for Structuring Datasets in Your Pool | TrueNAS Community
        • So you've got a shiny new FreeNAS server, just begging to have you create a pool and start loading it up. Assuming you've read @jgreco's The path to success for block storage sticky, you've decided on the composition of your pool (RAIDZx vs mirrors), and built your pool accordingly. Now you have an empty pool and a pile of bits to throw in.
        • STOP! You'll need to think at this point about how to structure your data.
    • Creating Pools
      • Creating Storage Pools | Documentation Hub
        • Provides information on creating storage pools and using VDEV layout options in TrueNAS SCALE.
        • Storage pools attach drives organized into virtual devices called VDEVs. ZFS and TrueNAS periodically review and heal when discovering a bad block in a pool. Drives arranged inside VDEVs provide varying amounts of redundancy and performance. ZFS and VDEVs combined create high-performance pools that maximize data lifetime.
        • All pools must have a data VDEV. You can add as many VDEV types (cache, log, spare, etc.) as you want to the pool for your use case but it must have a data VDEV.
      • Creating Pools (CORE) | Documentation Hub
        • Describes how to create pools on TrueNAS CORE.
        • Has some more information on VDEVs.
      • The storage pool is mounted under its name (/mnt/Magnetic_Storage) and all datasets (File system / ZVol / iSCSI) are nested under this and visible to the OS here.
    • Managing Pools
    • Expanding a Pool
    • Export/Disconnect or Delete a Pool
      • There is no dedicated delete option
        • You have the option, when you are disconnecting the pool, to destroy the pool data on the drives. This option (I think) does not do a drive zero-fill style wipe of the whole drive; it just destroys the relevant pool data.
        • You need to disconnect the pool cleanly before you can delete it, which is why there is no delete button; deletion is only offered as part of the disconnect process.
      • Storage --> [Pool-Name] --> Export/Disconnect
      • Managing Pools | Documentation Hub
        • The Export/Disconnect option allows you to disconnect a pool and transfer drives to a new system where you can import the pool. It also lets you completely delete the pool and any data stored on it.
      • Migrating ZFS Storage Pools
        • NB: These notes are based on Solaris ZFS but the wording still holds true.
        • Occasionally, you might need to move a storage pool between systems. To do so, the storage devices must be disconnected from the original system and reconnected to the destination system. This task can be accomplished by physically recabling the devices, or by using multiported devices such as the devices on a SAN. ZFS enables you to export the pool from one machine and import it on the destination system, even if the systems are of different architectural endianness.
        • Storage pools should be explicitly exported to indicate that they are ready to be migrated. This operation flushes any unwritten data to disk, writes data to the disk indicating that the export was done, and removes all information about the pool from the system.
        • If you do not explicitly export the pool, but instead remove the disks manually, you can still import the resulting pool on another system. However, you might lose the last few seconds of data transactions, and the pool will appear faulted on the original system because the devices are no longer present. By default, the destination system cannot import a pool that has not been explicitly exported. This condition is necessary to prevent you from accidentally importing an active pool that consists of network-attached storage that is still in use on another system.
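        • The underlying CLI operations look roughly like this (the pool name is an example; on TrueNAS use the GUI Export/Disconnect and pool import workflows where possible):
          # On the old system: flush data and cleanly release the pool
          zpool export MyPoolA

          # On the new system: list pools available for import, then import by name
          zpool import
          zpool import MyPoolA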
      • Export/Disconnect Window | Documentation Hub
        • Export/Disconnect opens the Export/disconnect pool: poolname window that allows users to export, disconnect, or delete a pool.
        • Exporting/disconnecting can be a destructive process! Back up all data before performing this operation. You might not be able to recover data lost through this operation.
        • Disks in an exported pool become available to use in a new pool but remain marked as used by an exported pool. If you select a disk used by an exported pool to use in a new pool the system displays a warning message about the disk.
        • Disconnect Options
          • Destroy data on this pool?
            • Select to erase all data on the pool. This deletes the pool data on the disks, effectively deleting all data.
          • Delete configuration of shares that use this pool?
            • Remove the share connection to this pool. Exporting or disconnecting the pool deletes the configuration of shares using this pool. You must reconfigure the shares affected by this operation.
          • Confirm Export/Disconnect *
            • Activates the Export/Disconnect button.
      • exporting my pool | TrueNAS Community
        • Q: I just upgraded my TrueNAS and I need to move the drives from the old TrueNAS to my new TrueNAS. Can I just disconnect them and plug them into my new TrueNAS?
        • A:
          • Export the pool only if you're not taking the boot pool/drive with you.
          • If all drives will move, it will be fine.
          • Be aware of things like different NIC in the new system as that can mess with jails or VMs, but otherwise all should be simple.
    • Rename a Pool
      • This is not an easy thing to do; a worked example sketch is included at the end of this section.
      • How To Rename a ZFS Pool | TrueNAS Community
        • Instructions
        • The basic process to rename a ZFS pool is to export it from the GUI, import it in the CLI with the new name, then export it again, and re-import it in the GUI.
        • I find I normally want to do this after creating a new pool (with perhaps a different set of disks/layout), replicating my old pool to the new pool, and then I want to rename the new pool to the same as the old pool, and then all the shares work correctly, and it's fairly transparent. Mostly.
      • Changing pool name | TrueNAS Community
        • Export the pool through the GUI. Be sure not to check the box to destroy all data.
        • From the CLI: zpool import oldpoolname newpoolname
        • From the CLI: zpool export newpoolname
        • From the GUI, import the pool.
      • renaming pool with jails/vms | TrueNAS Community - i need to rename a pool, its the pool with my jails and vms on it.
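      • Worked example sketch of the steps above (hypothetical pool names; run from the TrueNAS shell after exporting the pool in the GUI without destroying data):
        sudo zpool import tank Magnetic_Storage
        sudo zpool export Magnetic_Storage
        # then re-import the renamed pool from the GUI: Storage --> Import Pool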
  • VDEV (OpenZFS Virtual Device)
    • VDEVs, or Virtual DEVices, are the logical devices that make up a Storage Pool and they are created from one or usually more Disks. ZFS has many different types of VDEV.
    • Drives are arranged inside VDEVs to provide varying amounts of redundancy and performance. VDEVs allow for the creation of high-performance pools that maximize data lifetime.
    • TrueNAS Storage Primer on ZFS for Data Storage Professionals | TrueNAS
      • vdevs
        • The next level of storage abstraction in OpenZFS, the vdev or virtual device, is one of the more unique concepts around OpenZFS storage.
        • A vdev is the logical storage unit of OpenZFS storage pools. Each vdev is composed of one or more HDDs, SSDs, NVDIMMs, NVMe, or SATA DOMs.
        • Data redundancy, or software RAID implementation, is defined at the vdev level. The vdev manages the storage devices within it freeing higher level ZFS functions from this task.
        • A storage pool is a collection of vdevs which, in turn, are an individual collection of storage devices. When you create a storage pool in TrueNAS, you create a collection of vdevs with a certain redundancy or protection level defined.
        • When data is written to the storage pool, the data is striped across all the vdevs in the storage pool. You can think of a collection of vdevs in a storage pool as a RAID 0 stripe of virtual storage devices. Much of OpenZFS performance comes from this striping of data across the vdevs in a storage pool.
        • In general, the more vdevs in a storage pool, the better the performance. Similar to the general concept of RAID 0, the more storage devices in a RAID 0 stripe, the better the read and write performance.
    • Understanding ZFS vdev Type | Klara Systems
      • Excellent Explanation
      • The most common category of ZFS questions is “how should I set up my pool?” Sometimes the question ends “... using the drives I already have” and sometimes it ends with “and how many drives should I buy." Either way, today’s article can help you make sense of your options.
      • Note that a zpool does not directly contain actual disks (or other block/character devices, such as sparse files)! That’s the job of the next object down, the vdev.
      • vdev (Short for virtual device) whether "support or storage", is a collection of block or character devices (for the most part, disks or SSDs) arranged in a particular topology.
    • SOLVED - Clarification on different vdev types | TrueNAS Community
      • Data: Stores the files themselves, and everything else if no special vdevs are used.
      • Cache: I believe this is what people refer to as L2ARC, basically a pool-specific extension of the RAM-based ARC. Can improve read speeds by caching some files on higher speed drives. Should not be used on a system with less than 32/64GB (couldn't find a strong consensus there) or it may hurt performance by using up RAM. Should be less than 10x the total system RAM in size. Should be high speed and high endurance (since it's written to a lot), but failure isn't a huge deal as it won't cause data loss. This won't really do anything unless the system is getting a lot of ARC misses.
      • Log: I believe this is what people refer to as SLOG, a separate, higher speed vdev for write logs. Can improve speeds for synchronous writes. A synchronous write is when the ZFS write-data (not the files themselves, but some sort of ZFS-specific write log) is written to the RAM cache (ARC) and the pool (ZIL or SLOG if available) at the same time, vs an asynchronous write where it's written to ARC, then eventually gets moved to the pool. SLOG basically replaces the ZIL, but with faster storage, allowing sync writes to complete faster. Should be high speed, but doesn't need to be super high endurance like cache, since it sees a lot less writes. (Edit: I don't actually know this to be true. jgreco's guide on SLOGs says it should be high endurance, so maybe I don't understand exactly what the 'intent log' data is) Won't do anything for async writes, and general file storing is usually mostly async.
      • Hot Spare: A backup physical drive (or multiple drives) that are kept running, but no data is written to. In the event of a disk failure, the hot spare can be used to replace the failed disk without needing to physically move any disks around. Hotspare disks should be the same disks as whatever disks they will replace.
      • Metadata: A Separate vdev for storing just the metadata of the main data vdev(s), allowing it to be run on much faster storage. This speeds up file browsing or searching, as well as reading lots of files (at least, it speeds up the locating of the files, not the actual reading itself). If this vdev dies, the whole pool dies, so this should be a 2/3-way mirror. Should be high speed, but doesn't need super high endurance like cache.
      • Dedup: Stores the de-duplication tables for the data vdev(s) on faster storage, (I'm guessing) to speed up de-duplication tasks. I haven't really come across many posts about this, so I don't really know what the write frequency looks like.
      • Explaining ZFS LOG and L2ARC Cache (VDEV) : Do You Need One and How Do They Work? - YouTube | Lawrence Systems
    • Fixing my worst TrueNAS Scale mistake! - YouTube | Christian Lempa
      • In this video, I'll fix my worst mistake I made on my TrueNAS Scale Storage Server. We also talk about RAID-Z layouts, fault tolerance and ZFS performance. And what I've changed to make this server more robust and solid!
      • Do not add too many drives to a single Vdev
      • RAID-Z2 = I can allow for 2 drives to fail
      • Use SSD for the pool that holds the virtual disks and Apps
    • Types/Definitions
      • Data
        • (from SCALE GUI) Normal vdev type, used for primary storage operations. ZFS pools always have at least one DATA vdev.
        • You can configure the DATA VDEV in one of the following topologies:
          • Stripe
            • Requires at least one disk
            • Each disk is used to store data; it has no data redundancy.
            • The simplest type of vdev.
            • This is the absolute fastest vdev type for a given number of disks, but you’d better have your backups in order!
            • Never use a Stripe type vdev to store critical data! A single disk failure results in losing all data in the vdev.
          • Mirror
            • Data is identical in each disk. Requires at least two disks, has the most redundancy, and the least capacity.
            • This simple vdev type is the fastest fault-tolerant type.
            • In a mirror vdev, all member devices have full copies of all the data written to that vdev.
            • A standard RAID1 mirror
          •  RAID-Z1
            • Requires at least three disks.
            • ZFS software 'distributed' parity based RAID.
            • Uses one disk for parity while all other disks store data.
            • This striped parity vdev resembles the classic RAID5: the data is striped across all disks in the vdev, with one disk per row reserved for parity.
            • For example, when using 4 disks, 1 drive can fail.
          • RAID-Z2
            • Requires at least four disks.
            • ZFS software 'distributed' parity based RAID
            • Uses two disks for parity while all other disks store data.
            • The second (and most commonly used) of ZFS’ three striped parity vdev topologies works just like RAIDz1, but with dual parity rather than single parity
            • With 4 disks, you only have 50% of the total disk space available to use.
            • When using 4 disks, 2 drives can fail. Minimum 4 disks required.
          • RAID-Z3
            • Requires at least five disks.
            • ZFS software 'distributed' parity based RAID
            • Uses three disks for parity while all other disks store data.
            • This final striped parity topology uses triple parity, meaning it can survive three drive losses without catastrophic failure.
            • With 4 disks you would only have 25% of the total disk space available and 3 drives could fail; note, however, the five-disk minimum above (with 5 disks, roughly 40% of the raw space is usable).
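        • Rough usable capacity for any RAID-Z layout (a rule of thumb, not from the source, ignoring ZFS overhead): usable space = (number of disks - number of parity disks) x size of the smallest disk. For example, 6 x 4TB disks in RAID-Z2 = (6 - 2) x 4TB = 16TB usable.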
      • Cache
        • A ZFS L2ARC read-cache that can be used with fast devices to accelerate read operations.
        • An optional vdev you can add or remove after creating the pool, and is only useful if the RAM is maxed out.
        • Aaron Toponce : ZFS Administration, Part IV- The Adjustable Replacement Cache
          • This is a deep-dive into the L2ARC system.
          • Level 2 Adjustable Replacement Cache, or L2ARC - A cache residing outside of physical memory, typically on a fast SSD. It is a literal, physical extension of the RAM ARC.
        • OpenZFS: All about the cache vdev or L2ARC | Klara Inc - CACHE vdev, better known as L2ARC, is one of the well-known support vdev classes under OpenZFS. Learn more about how it works and when is the right time to wield this powerful tool.
      • Log
        • A ZFS LOG device that can improve speeds of synchronous writes.
        • An optional write-cache that you can add or remove after creating the pool.
        • A dedicated VDEV for ZFS’s intent log, it can improve performance
      • Hot Spare
        • Drive reserved for inserting into DATA pool vdevs when an active drive has failed.
        • From CORE doc
          • Hot Spare are drives reserved to insert into Data vdevs when an active drive fails. Hot spares are temporarily used as replacements for failed drives to prevent larger pool and data loss scenarios.
          • When a failed drive is replaced with a new drive, the hot spare reverts to an inactive state and is available again as a hot spare.
          • When the failed drive is only detached from the pool, the temporary hot spare is promoted to a full data vdev member and is no longer available as a hot spare.
      • Metadata
        • A Special Allocation class, used to create Fusion Pools.
        • An optional vdev type which is used to speed up metadata and small block IO.
        • A dedicated VDEV to store Metadata
      • Dedup
        • A dedicated VDEV to Store ZFS de-duplication tables
        • Deduplication is not recommended (level1)
        • Requires allocating X GiB for every X TiB of general storage. For example, 1 GiB of Dedup vdev capacity for every 1 TiB of Data vdev availability.
      • File
        • A pre-allocated file.
        • TrueNAS does not support this.
      • Physical Drive (HDD, SDD, PCIe NVME, etc)
        • TrueNAS does not support this. Unless this is ZVol?.
      • dRAID (aka Distributed RAID)
        • TrueNAS does not support this.
        • dRAID — OpenZFS documentation
          • dRAID is a variant of raidz that provides integrated distributed hot spares which allows for faster resilvering while retaining the benefits of raidz. A dRAID vdev is constructed from multiple internal raidz groups, each with D data devices and P parity devices. These groups are distributed over all of the children in order to fully utilize the available disk performance. This is known as parity declustering and it has been an active area of research. The image below is simplified, but it helps illustrate this key difference between dRAID and raidz.
        • OpenZFS 2.1 is out—let’s talk about its brand-new dRAID vdevs | Ars Technica - dRAID vdevs resilver very quickly, using spare capacity rather than spare disks.
      • Special
        • TrueNAS does not support this
        • The SPECIAL vdev is the newest support class, introduced to offset the disadvantages of DRAID vdevs (which we will cover later). When you attach a SPECIAL to a pool, all future metadata writes to that pool will land on the SPECIAL, not on main storage.
        • Losing any SPECIAL vdev, like losing any storage vdev, loses the entire pool along with it. For this reason, the SPECIAL must be a fault-tolerant topology
  • Dataset
    • What is a dataset and what does it do? newbie explanation:
      • It is a filesystem:
        • It is a container that holds a filesystem, similar to a hard drive holding a single NTFS partition.
        • The dataset's file system can be `n` folders deep; there is no limit.
        • This associated filesystem can be mounted or unmounted. This will not affect the dataset's configurability or its place in the hierarchy, but it will affect the ability to access its files in the file system.
      • Can have Child Datasets:
        • A dataset can have nested datasets within it.
        • These datasets will appear as folders in their parent dataset's file system.
        • These datasets can inherit the permissions from their parent dataset or have their own.
        • Each child dataset has its own independent filesystem which is accessed through its folder in the parent's filesystem.
      • Each dataset can be configured:
        • A dataset defines a single configuration that is used by all of its file system folders and files. Child datasets will also use this configuration if they are set to inherit the config/settings.
        • A dataset configuration can define: compression level, access control (ACL) and much more.
        • As long as you have the permissions, you can browse through all of a dataset's file system and child datasets from the root/parent dataset, or from wherever you set the share (obviously you cannot go up further than where the share is mounted). They will act like one file system, but with some folders (as defined by datasets) having different permissions.
        • You set permissions (and other things) per dataset, not per folder.
    • Always use SMB for dataset share type
      • Unless you know different and why, you should always set your datasets to use SMB as this will utilise the modern ACL that TrueNAS provides.
    • General
      • Datasets | Documentation Hub
      • Adding and Managing Datasets | Documentation Hub
        • Provides instructions on creating and managing datasets.
        • A dataset is a file system within a data storage pool. Datasets can contain files, directories (child datasets), and have individual permissions or flags. Datasets can also be encrypted, either using the encryption created with the pool or with a separate encryption configuration.
        • TrueNAS recommends organizing your pool with datasets before configuring data sharing, as this allows for more fine-tuning of access permissions and using different sharing protocols.
      • TrueNAS Storage Primer on ZFS for Data Storage Professionals | TrueNAS
        • Datasets
          • A dataset is a named chunk of storage within a storage pool used for file-based access to the storage pool. A dataset may resemble a traditional filesystem for Windows, UNIX, or Mac. In OpenZFS, a raw block device, or LUN, is known as a zvol. A zvol is also a named chunk of storage with slightly different characteristics than a dataset.
          • Once created, a dataset can be shared using NFS, SMB, AFP, or WebDAV, and accessed by any system supporting those protocols. Zvols are accessed using either iSCSI or Fibre Channel (FC) protocols.
      • TrueNAS Scale: A Step-by-Step Guide to Dataset, Shares, and App Permissions - YouTube | Lawrence Systems
      • 8. Create Dataset - Storage — FreeNAS® User Guide 9.10.2-U2 Table of Contents - An existing ZFS volume can be divided into datasets. Permissions, compression, deduplication, and quotas can be set on a per-dataset basis, allowing more granular control over access to storage data. A dataset is similar to a folder in that you can set permissions; it is also similar to a filesystem in that you can set properties such as quotas and compression as well as create snapshots.
    • Tutorials
    • Use SMB for dataset share type
  • ZVol
    • What is a ZVol? newbie explanation:
      • A ZFS Volume (zvol) is a dataset that represents a block device or virtual disk drive.
      • It does not have a file system.
      • It is similar to a virtual disk file.
      • It can inherit the permissions of its parent dataset or have its own.
    • Zvol = ZFS Volume = Zettabyte File System Volume
    • ZVols store no metadata in them (e.g. sector size); this is all stored in the TrueNAS config (VM/iSCSI config).
    • Adding and Managing Zvols | Documentation Hub
      • Provides instructions on creating, editing and managing zvols.
      • A ZFS Volume (zvol) is a dataset that represents a block device or virtual disk drive.
      • TrueNAS requires a zvol when configuring iSCSI Shares.
      • Adding a virtual machine also creates a zvol to use for storage.
      • Storage space you allocate to a zvol is only used by that volume, it does not get reallocated back to the total storage capacity of the pool or dataset where you create the zvol if it goes unused.
    • 8. Create ZVol - Storage — FreeNAS® User Guide 9.10.2-U2 Table of Contents - A zvol is a feature of ZFS that creates a raw block device over ZFS. This allows you to use a zvol as an iSCSI device extent.
  • boot-pool
    • check Status
      • System Settings --> Boot --> Boot Pool Status
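      • CLI equivalent (a minimal sketch; assumes the default boot pool name `boot-pool`):
        sudo zpool status boot-pool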
  • Troubleshooting ZFS
    • Can’t import pools on new system after motherboard burnt on power up | TrueNAS Community
      • My motherboard made zappy sounds and burnt electrical smell yesterday as I was powering it on. So I pulled the power straight away.
      • We almost need a Newbie / Noob guide to success. Something that says, don't use L2ARC, SLOG, De-Dup, Special Meta-devices, USB, hardware RAID, and other things we see here. After they are no longer Newbies / Noobs, they will then understand what some of those are and when to use / not use them.
      • A worked forum thread on some ideas on how to proceed and a good example of what to do in case of mobo failure.
    • Does a dataset get imported automatically when a pool from a previous version is imported? | TrueNAS Community
      • Q:
        • My drive for the NAS boot physically failed and I had to install a new boot drive. I installed the most current version of FreeNAS on it. Then Accounts were re-created and I imported the pool from the existing storage disk.
        • The instructions are unclear at this point. Does the pool import also import the dataset that was created in the previous install or will I need to add a new dataset to the pool that I just imported? Seems like the latter is the correct answer but I want to make sure before I make a non-reversible mistake.
      • A:
        • Yes - importing a pool means you imported the pool's datasets as well, because they are part of the pool.
        • It might be better to say that there's no "import" for datasets, because, as you note, they're simply part of the pool. Importing the pool imports everything on the pool, including files and zvols and datasets and everything.
        • However, you will have lost any configuration related to sharing out datasets or zvols unless you had a saved version of the configuration.
      • Q:
        • In reference to the imported pool/data on this storage disk. The manual states that data is deleted when a dataset is deleted. It doesn't clarify what happens when the configuration is lost. Can I just create a new dataset and set up new permissions to access the files from the previous build, or is the data in this pool inaccessible forever? (i.e. do I need to start over or can I reattach access permissions to the existing data?)
      • A:
        • FreeNAS saves the configuration early each morning by default. If you had your system dataset on your data pool you'll be able to get to it. See post 35 in this thread Update went wrong | Page 2 | TrueNAS Community for details.
        • You may want to consider putting the system dataset on your data pool if not already done so - (CORE) System --> System Dataset
        • Those two things are wildly different kinds of things. Your configuration database is data written to a ZFS pool. A ZFS pool is a collection of vdevs on which you create filesystems called datasets. If you delete a filesystem, the information written on it is lost. Some things can be done to recover the data on destroyed filesystems, but in the case of ZFS it's harder than in other cases. If you delete a dataset, consider the data lost, or send the drives to a data recovery company specializing in ZFS.
    • ZFS Recovery
    • Update went wrong | Page 2 | TrueNAS Community
      • The config db file is named freenas-v1.db and is located at: /data
      • However, if that directory is located on the USB boot device that is failed, this may not help at all.
      • You can recover a copy that is automatically saved for you in the system dataset, if the system dataset is on the storage pool.
      • For people like me, I moved the system dataset to the boot pool, this is no help, but the default location of the system dataset is on the storage pool.
      • If you do a fresh install of FreeNAS on a new boot media, and import the storage pool, you should find the previous config db at this path:
        /var/db/system/ plus another directory that will be named configs-****random_characters****.
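      • A minimal sketch to list those saved config backups from the TrueNAS shell after importing the pool (the directory suffix is random, so a glob is used):
        ls -l /var/db/system/configs-*/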

Backup

Backup Types

  • TrueNAS Config
    • Your server's settings, including such things as: ACLs, users, virtual machine configs, iSCSI configs.
  • Dataset Full Replication
    • Useful for making a single backup of a dataset manually.
  • Dataset Incremental Replication (Rolling Backup)
    • A full backup is maintained but only changes are sent reducing bandwidth usage.
    • These are useful for setting up automated backups.
  • Files - Copy files only
    • This is the traditional method of backing up.
    • This can be used to copy files to a non-ZFS system.
  • Cloud Sync Task
    • PUSH/PULL files from a Cloud provider
  • General
    • Backing Up TrueNAS | Documentation Hub
      • Provides general information and instructions on setting up data storage backup solutions, saving the system configuration and initial system debug files, and creating a boot environment.
      • Cloud sync for Data Backup
      • Replication for Data Backup
      • Backing Up the System Configuration
      • Downloading the Initial System Debug File
    • Data Backups | Documentation Hub
      • Describes how to configure data backups on TrueNAS CORE. With storage created and shared, it’s time to ensure TrueNAS data is effectively backed up.
      • TrueNAS offers several options for backing up data: `Cloud Sync` and `Replication`.
    • Data Protection | Documentation Hub - Tutorials related to configuring data backup features in TrueNAS SCALE.
    • System Dataset (CORE) | Documentation Hub
      • The system dataset stores debugging core files, encryption keys for encrypted pools, and Samba4 metadata such as the user and group cache and share level permissions.
    • TrueNAS: Backup Immutability & Hardening - YouTube | Lawrence Systems - A strategic overview of the backup process using immutable backup repositories.
  • TrueNAS Configuration Backup
    • Using Configuration Backups (CORE) | Documentation Hub
      • Provides information concerning configuration backups on TrueNAS CORE. I could not find the SCALE version.
      • Backup configs store information for accounts, network, services, tasks, virtual machines, and system settings. Backup configs also index ID’s and credentials for account, network, and system services. Users can view the contents of the backup config using database viewing software like SQLite DB Browser.
      • Automatic Backup - TrueNAS automatically backs up the configuration database to the system dataset every morning at 3:45 (relative to system time settings). However, this backup does not occur if the system is off at that time. If the system dataset is on the boot pool and it becomes unavailable, the backup also loses availability.
      • Important - You must backup SSH keys separately. TrueNAS does not store them in the configuration database. System host keys are files with names beginning with ssh_host_ in /usr/local/etc/ssh/. The root user keys are stored in /root/.ssh.
      • These notes are based on CORE.
      • Download location
        • (CORE) System --> General --> Save Config
        • (SCALE) System Settings --> General --> Manage Configuration (button top left) --> Download File
  • System Dataset (TrueNAS configuration) - should this be in the dataset section -----******???
    • The system dataset stores critical data like debugging core files, encryption keys for pools, and Samba 4 metadata such as the user/group cache and share level permissions.
    • The root dataset of the first pool you create automatically becomes the `system dataset`. In most people's cases this is the `boot-pool` because you only have your boot drive(s) installed when setting up TrueNAS. TrueNAS sets up the pool with the relevant ZFS/Pool/Vdev configuration on your boot drive(s).
    • This dataset can be in a couple of places as TrueNAS automatically moves the system dataset to the most appropriate pool by using these rules:
      1. When you create your first storage pool, TrueNAS automatically moves the `system dataset` to the new storage pool, away from the `boot-pool`, as this gives much better protection to your system.
      2. Exporting the pool with the system dataset on it will cause TrueNAS to transfer the system dataset to another available pool. If the only available pool is encrypted, that pool will no longer be able to be locked. When no other pools exist, the system dataset transfers back to the TrueNAS operating system device (`boot-pool`).
    • You can manually move this dataset yourself
      • System Settings --> Advanced --> Storage --> Configure --> System Dataset Pool
    • Setting the System Dataset (CORE) | Documentation Hub
      • Describes how to configure the system dataset on TrueNAS CORE.
      • Not sure if this all still applies.
  • System Dataset / Boot Drive
    • Should I RAID/Mirror the boot drive?
      • Never use a hardware RAID when you are using TrueNAS, as it is pointless and will cause errors along the way.
      • TrueNAS would not offer the option to mirror the boot drive if it was pointless.
      • Should I Raid the Boot drive and what size should the drives be? | TrueNAS Community - My thread.
        • 16 GB or more is sufficient for the boot drive.
        • It's not really necessary to mirror the boot drive. It's more critical to regularly back up your config. If you have a config backup and your boot drive goes south, reinstalling to a new boot drive and then uploading your config will restore your system like it never happened.
        • Setting up the mirror during installation.
          • There is really no reason to wait until later, unless you're doing more advanced tricks like partitioning the device to use a portion of it for L2ARC or other purposes.
        • Is it a good policy to make the boot drive mirrored? See different responses below:
          1. It's not really necessary to mirror the boot drive. It's more critical to regularly back up your config. If you have a config backup and your boot drive goes south, reinstalling to a new boot drive and then uploading your config will restore your system like it never happened.
          2. Probably, but it depends on your tolerance for downtime.
            • The config file is the important thing; if you have a backup of that (and you do, on your pool, if you can get to it; but it's better to download copies as you make significant system changes), you can restore your system to an identical state when a boot device fails. If you don't mind that downtime (however long it takes you to realize the failure, source and install a replacement boot device, reinstall TrueNAS, and upload the config file), then no, mirroring the boot devices isn't a particularly big deal.
            • If that downtime would be a problem for you, a second SSD for a boot mirror is cheap insurance.
        • = Yes, and I will let TrueNAS mirror the boot-drive during the installation as I don't want any downtime.
      • Copy the config on the boot drive to the storage drive
        • Is this the system dataset?
        • Best Boot Drive Size for FreeNAS | TrueNAS Community
          • And no, the only other thing you can put on the boot is the System Dataset. Which is a pity, I'd be very happy to be able to choose to put the jails dataset on there or swap.
          • FreeNAS initially puts the .system dataset on the boot pool. Once you create a data pool, though, it's moved there automatically.
    • Boot: RAID-1 to No Raid | TrueNAS Community
      • Q: Is there a way to remove a boot mirror and just replace it with a single USB drive, without reinstalling FreeNAS?
      • A: Yes, but why would you want to?
        zpool detach pool device
    • Best practices for System Dataset Pool location | TrueNAS Community
      • Do not let your drives spin down.
      • Q: From what I've read, by default the System Dataset pool is the main pool. In order to allow the HDDs on that pool to spin down, can the system dataset be moved to say a USB pen? Even to the freenas-boot - perhaps periodically keeping a mirror/backup of that drive?
      • Actually, you probably DONT want your disks to spin down. When they do, they end up spinning down and back up all day long. You will ruin your disks in no time doing that. A hard drive is meant to stop and restart only so many times. It is fine for a desktop to spin down because the disks will not start for hours and hours. But for a NAS, every network activity is subject to re-start the disks and often, they will restart every few minutes.
      • To have the system dataset in the main pool also helps you recover your system's data from the pool itself and not from the boot disk. So that is a second reason to keep it there.
      • Let go of the world you knew young padawan. The ZFS handles the mirroring of drives. Do not let spinners stop, the thermodynamics will weaken their spirit and connection to the ZFS. USB is the path to the dark side, the ZFS is best channeled through SAS/SATA and actually prices of SSDs are down to thumb drive prices even if you don’t look at per TB price..
      • Your plan looks like very complicated and again, will not be that good for the hard drive. To heat up and cool down, just like spinning up and down, is not good either. The best thing for HDD is to stay up, spinning and hot all the time.
      • What do you try to achieve by moving the system dataset out of the main pool ?
        • To let the main pool's drives spin down? = Bad idea
        • To let the main pool's drive cool down? = Bad idea
        • To save space in the main pool? = Bad idea (system dataset is very small, so no benefit here)
        • Because there is no benefit doing it, doing so remains a bad idea...
        • The constant IO will destroy a pendrive in a matter of months
  • Dataset Backup / Cloud Sync Tasks
  • Snapshots
    • Managing Snapshots | Documentation Hub - Provides instructions on managing ZFS snapshots in TrueNAS Scale.
      • Cloning Datasets
        • This will only allow cloning the Dataset to the same Pool.
          Datasets --> Data Protection --> Manage Snapshots --> [Source Snapshot] --> Clone To New Dataset
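        • A CLI equivalent (a sketch with hypothetical names); note that a clone always lives in the same pool as its origin snapshot:
          sudo zfs clone MyPoolA/MyDatasetA@MySnapshot MyPoolA/MyDatasetA-clone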
    • How To Use TrueNAS ZFS Snapshots For Ransomware Protection & VSS Shadow Copies - YouTube | Lawrence Systems
      • How to make the shadow copies immutable, i.e. not accessible by RansomWare.
      • Why you need to keep your passwords separate/different.
      • Enabling `Shadow Copies` on SMB shares. This allows Windows users to see previous versions of the file from Windows context menus.
      • Chapters
        • 0:00 The Ransomeware and Issues with Restoring
        • 3:02 The TrueNAS server setup
        • 4:07 Keeping Separate Root Password
        • 5:05 TrueNAS Dataset Configuration
        • 5:34 TrueNAS Share Configuration For VSS
        • 6:55 How To Setup Snapshots on TrueNAS
        • 10:49 Restoing TrueNAS Volume Shadow Copies in Windows
        • 12:30 TrueNAS cloning Snapshot to new dataset
        • 15:42 Performing TrueNAS full rollback with Snapshot
  • Misc
    • Hardened Backup Repository for Veeam | Documentation Hub
      • This guide explains in detail how to create a Hardened Backup Repository for Veeam Backup with TrueNAS Scale, meaning a repository that will survive any remote attack.
      • The main idea of this guide is the disabling of the webUI with an initialisation script and a cron job to prevent remote deletion of the ZFS snapshots that guarantee data immutability.
      • The key points are:
        • Rely on ZFS snapshots to guarantee data immutability
        • Reduce the surface of attack to the minimum
        • When the setup is finished, disable all remote management interfaces
        • Remote deletion of snapshots is impossible even if all the credentials are stolen.
        • The only way to delete the snapshot is having physically access to the TrueNAS Server Console.
      • This is similar to what Wasabi can offer and is great protection from ransomware.
  • AWS S3 / Remote Backup
  • ZVol
    • These can be backed up by snapshots.
    • Back up by sharing out the ZVol as an iSCSI share and imaging it as a disk on a Windows PC.
    • You can use the Linux command 'dd' to perform a disk copy (a hedged example follows at the end of this list).
    • Complete backup (including zvols) to target system (ssh/rsync) with no ZFS support | TrueNAS Community
      • A zvol sent with zfs send is just a stream of bytes so instead of zfs receive into an equivalent zvol on the target system you can save it as a file.
        zfs send pool/path/to/zvol@20230302 | gzip -c >/mnt/some/location/zvol@20230302.gz
      • This file can be copied to a system without ZFS support. You will not be able to create incremental backups this way, though. Each copy takes up the full space - not the nominal size, of course, but all the data "in" the zvol after compression.
      • For restore just do the inverse
        gzip -dc /mnt/some/location/zvol@20230302.gz | zfs receive pool/path/to/zvol
      • This can probably be used for moving a ZVol as well.
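    • dd example (the hedged sketch referenced above; it assumes the ZVol's device node under /dev/zvol, a hypothetical target path, and that the ZVol is not in use):
      sudo dd if=/dev/zvol/MyPoolA/Virtual_Disks/MyZVol of=/mnt/MyPoolA/Backups/MyZVol.img bs=1M status=progress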

 

iSCSI

  • General
    • An IP-based hard drive. It presents as a hard drive, so a remote OS (Windows, Linux, or others) can use it as such.
    • This can be formatted like any drive to whatever format you want.
    • What is iSCSI and How Does it Work? - The iSCSI protocol allows the SCSI command to be sent over LANs, WANs and the internet. Learn about its role in modern data storage environments and iSCSI SANs.
      • iSCSI is a transport layer protocol that describes how Small Computer System Interface (SCSI) packets should be transported over a TCP/IP network.
      • allows the SCSI command to be sent end-to-end over local-area networks (LANs), wide-area networks (WANs) or the internet.
    • What Is iSCSI & How Does It Work? | Enterprise Storage Forum - iSCSI (Internet Small Computer Systems Interface) is a transport layer protocol that works on top of the transport control protocol.
    • iSCSI and zvols | [H]ard|Forum
      • Q:
        • Beginning the final stages of my new server setup and I am aiming to use iSCSI to share my ZFS storage out to a Windows machine (WHS 2011 that will manage it and serve it to the PCs in my network), however I'm a little confused.
        • Can I simply use iSCSI to share an entire ZFS pool? I have read a lot of guides that all show sharing a zvol, if I DO use a zvol is it possible in the future to expand it and thereby increase the iSCSI volume that the remote computer will see?
      • A:
        • iSCSI is a SAN-protocol, and as such the CLIENT computer (windows) will control the filesystem, not the server which is running ZFS.
        • So how does this work: ZFS reserves a specific amount of space (say 20GB) in a zvol which acts as a virtual harddrive with block-level storage. This zvol is passed to iSCSI-target daemon which exports over the network. Finally your windows iSCSI driver presents a local disk, which you can then format with NTFS and actually use.
        • In this example, the server is not aware of any files stored on the iSCSI volume. As such you cannot share your entire pool; you can only share zvols or files. ZVOLs obey flush commands and as such are the preferred way to handle iSCSI images where data security/integrity is important. For performance bulk data which is less important, a file-based iSCSI disk is possible. This would just be a 8GB file or something that you export.
        • You can of course make zvol or file very big to share your data this way, but keep in mind only ONE computer can access this data at one time. So you wouldn't be running a NAS in this case, but only a SAN.
  • Tutorials
  • Misc
  • TrueNAS
    • Upload a disk image into a ZVol on your TrueNAS:
      • TrueNAS
        • Create a ZVol on your TrueNAS
        • Create an iSCSI share of the ZVol on your TrueNAS.
          • If not sure, I would use: Sharing Platform : Modern OS: Extent block size 4k, TPC enabled, no Xen compat mode, SSD speed
      • Windows
        • Startup and connect the iSCSI share on your TrueNAS using the iSCSI initiator on Windows.
        • Mount target
          • Attach the hard disk you want to copy to the ZVol.
            or
          • Make sure you have a RAW disk image of the said drive instead.
        • Load your Disk Imaging software, on Windows.
        • Copy your source hard drive or send your RAW disk image to the target ZVol (presenting as a hard drive).
        • Release the ZVol from the iSCSI initiator.
      • TrueNAS
        • Disconnect the ZVol from the iSCSI share.
        • Create VM using the ZVol as its hard drive
      • Done
      • NB: This can also be used to make a backup of the ZVol
    • Change Block Size
      • iSCSI --> Configure --> Extents --> 'your name' --> Edit Extent --> Logical Block Size
      • This does both Logical and Physical.
    • If you cannot use a ZVol after using it in iSCSI
      • Check the general iSCSI config and delete related stuff in there. I have no idea what most of it is.

Dummy ZVols

These are useful if you need to re-use a ZVol that is attached to a VM somewhere else, but you want to keep the VM intact. The dummy ZVol stands in for the real disk so the VM's configuration remains valid in the TrueNAS config.

Example Dummy ZVol Names:

As you can see, the names refer to the type of disk they are and where they are being used. Although this is not important, it might be useful from an admin point of view, and you can make these names as complex as required as these are just my examples.

  • For VMs
    • Dummy_VM
    • Dummy_iSCSI_512
    • Dummy_iSCSI_4096
  • For iSCSI
    • legacy-os-512
    • modern-os-4096

Instructions

Just create a ZVol in your preferred location and make it 1MB in size.
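
A CLI sketch of the same thing (hypothetical pool and ZVol name; creating it in the GUI works just as well):

sudo zfs create -V 1M MyPoolA/Virtual_Disks/Dummy_VM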

Dummy ISO

This can be used to maintain a CD-ROM device in a VM.

Create a blank ISO using one of the following options and name the file Dummy.iso (a minimal dd sketch follows the list):

  1. Use MagicISO, UltraISO and save the empty ISO.
  2. Open text editor and save Dummy.iso
  3. Image a blank CD (if possible)
  4. Linux - use DD to make an image of an ISO file (not tested this).
  5. Download a blank ISO image.
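
A minimal Linux sketch of options 2/4 (untested, as noted above): create a tiny zero-filled file named Dummy.iso. Whether a given VM accepts it as a valid ISO is not guaranteed.

dd if=/dev/zero of=Dummy.iso bs=1K count=1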

 

Sharing Data

Permissions

  • Reset permissions on a Root Dataset
    • chown = change owner
    • Make sure you know why you are doing this, as I don't know if it will cause any problems or fix any.
    • In TrueNAS, changes to permissions on top-level datasets are not allowed. This is a design decision, and users are encouraged to create datasets and share those out instead of sharing top-level datasets. Changes may still be made from the command-line. To change the root dataset default permissions, you need to create at least one dataset below the root in each of your pools. Alternatively, you can use rsync -auv /mnt/pool/directory /mnt/pool/dataset to copy files and avoid permission issues.
    • Edit Permissions is Greyed out and no ACL option on Dataset | TrueNAS Community
      • The webui / middleware does not allow changes to permissions on top-level datasets. This is a design decision. The intention is for users to create datasets and share those out rather than sharing top-level datasets. Changes may still be made from the command-line.
    • Reset Pool ACL Freenas 11.3 | TrueNAS Community
      • I ended up solving this using chown root:wheel /mnt/storage
    • I restored `Mag` to using root as owner. Not sure that is how it was at the beginning though, and this did not fix my VM issue.
      chown root:wheel /mnt/storage
  • You cannot use the admin or root user accounts to access Windows (SMB) shares.

Datasets

Getting access to your files is one of the most essential parts of TrueNAS, but for the beginner it can be tricky.

ZVols via iSCSI

Copy (Replicate, Clone), Move, Delete; Datasets and ZVols

This is a summary of commands and research for completing these tasks.

  • Where possible you should do any data manipulation in the GUI, that is what it is there for.
  • Snapshots are not backups, they only record the changes made to a dataset, but they can be used to make backups through replication of the dataset.
  • Snapshots are great for ransomware protection and reverting changes made in error.
  • ZVols are a special Dataset type.
  • TrueNAS GUI (Data Protection) supports:
    • Periodic Snapshot Tasks
    • Replication Tasks (zfs send/receive)
    • Cloud Sync Tasks (AWS, S3, etc...)
    • Rsync Tasks (only scheduled, no manual option)
  • Commands:
    • zfs-rename.8 — OpenZFS documentation
      • Rename ZFS dataset.
      • -r : Recursively rename the snapshots of all descendent datasets. Snapshots are the only dataset that can be renamed recursively.
    • zfs-snapshot.8 — OpenZFS documentation
      • Create snapshots of ZFS datasets.
      • This page has an example of `Performing a Rolling Snapshot` which shows how to maintain a history of snapshots with a consistent naming scheme (a sketch of this rotation is at the end of this command list). To keep a week's worth of snapshots, the user destroys the oldest snapshot, renames the remaining snapshots, and then creates a new snapshot.
      • -r : Recursively create snapshots of all descendent datasets.
    • zfs-send.8 — OpenZFS documentation
      • Generate backup stream of ZFS dataset which is written to standard output.
      • -R : Generate a replication stream package, which will replicate the specified file system, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved.
      • -I snapshot : Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot. For example, -I @a fs@d is similar to -i @a fs@b; -i @b fs@c; -i @c fs@d. The incremental source may be specified as with the -i option.
      • -i snapshot|bookmark : Generate an incremental send stream. The incremental source must be an earlier snapshot in the destination's history. It will commonly be an earlier snapshot in the destination's file system, in which case it can be specified as the last component of the name (the # or @ character and following). If the incremental target is a clone, the incremental source can be the origin snapshot, or an earlier snapshot in the origin's filesystem, or the origin's origin, etc.
    • zfs-receive.8 — OpenZFS documentation
      • Create snapshot from backup stream.
      • zfs recv can be used as an alias for zfs receive.
      • Creates a snapshot whose contents are as specified in the stream provided on standard input. If a full stream is received, then a new file system is created as well. Streams are created using the zfs send subcommand, which by default creates a full stream.
      • If an incremental stream is received, then the destination file system must already exist, and its most recent snapshot must match the incremental stream's source. For zvols, the destination device link is destroyed and recreated, which means the zvol cannot be accessed during the receive operation.
      • -d : Discard the first element of the sent snapshot's file system name, using the remaining elements to determine the name of the target file system for the new snapshot as described in the paragraph above. I think this is just used to rename the root dataset in the snapshot before writing it to disk, i.e. copy and rename.
    • zfs-destroy.8 — OpenZFS documentation
      • Destroy ZFS dataset, snapshots, or bookmark.
      • filesystem|volume
        • -R : Recursively destroy all dependents, including cloned file systems outside the target hierarchy.
      • snapshots
        • -R : Recursively destroy all clones of these snapshots, including the clones, snapshots, and children. If this flag is specified, the -d flag will have no effect. Don't use this unless you know why!!!
        • -r : Destroy (or mark for deferred deletion) all snapshots with this name in descendent file systems. This is a filtered destroy, so rather than wiping everything related, you can just delete a specified set of snapshots by name.
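    • Rolling snapshot rotation (a sketch based on the `Performing a Rolling Snapshot` man page example referenced above; hypothetical dataset and snapshot names):
      # drop the oldest snapshot, shuffle the others along, then take a fresh one
      sudo zfs destroy -r MyPoolA/MyDatasetA@3daysago
      sudo zfs rename -r MyPoolA/MyDatasetA@2daysago @3daysago
      sudo zfs rename -r MyPoolA/MyDatasetA@yesterday @2daysago
      sudo zfs rename -r MyPoolA/MyDatasetA@today @yesterday
      sudo zfs snapshot -r MyPoolA/MyDatasetA@today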

ZFS Dataset Commands to Use

I have added sudo where required but you might not need to use this if you are using the root account (not recommended).

Moving a dataset is not as easy as moving a folder in Windows or a Linux GUI.

Rename/Move a Dataset (within the same Pool) - (zfs rename)

  • Rename/Move Datasets (Mounted/Unmounted) or offline ZVols within the same Pool only.
  • You should never copy/move/rename a ZVol while it is being used as the underlying VM might have issues.

The following commands will allows you to rename or move a Dataset or an offline ZVol. Pick one of the following or roll your own:

# Rename/Move a Dataset/ZVol within the same pool (it is not bothered if the dataset is mounted, but might not like an 'in-use' ZVol). Can only be used if the source and targets are in the same pool.
sudo zfs rename MyPoolA/Virtual_Disks/Virtualmin MyPoolA/Virtual_Disks/TheNewName
sudo zfs rename MyPoolA/Virtual_Disks/Virtualmin MyPoolA/TestFolder/Virtualmin
sudo zfs rename MyPoolA/Virtual_Disks/Virtualmin MyPoolA/TestFolder/TheNewName

Copy/Move a Dataset - (zfs send | zfs receive) (without Snapshots)

  • Copy unmounted Datasets or offline ZVols.
  • This will work across pools including remote pools.
  • If you delete the sources this process will then act as a move.
  • Recursive switch is optional for
    • a ZVol if you just want to copy the current disk.
    • normal datasets but unless you know why, leave it on.

The following will show you how to copy or move Datasets/ZVols.

  1. Send and Receive the Dataset/ZVol
    This uses STDOUT/STDIN stream. Pick one of the following or roll your own:
    sudo zfs send -R MyPoolA/Virtual_Disks/MyDatasetA | sudo zfs receive MyPoolA/Virtual_Disks/NewDatasetName
    sudo zfs send -R MyPoolA/Virtual_Disks/MyDatasetA | sudo zfs receive MyPoolB/Virtual_Disks/MyDatasetA
    sudo zfs send -R MyPoolA/Virtual_Disks/MyDatasetA | ssh <IP|Hostname> zfs receive RemotePool/Virtual_Disks/MyDatasetA (If no SSH trust is set up then you will be prompted for the credentials of the remote server)
  2. Correct disks usage (ZVols only)
    This will change the ZVol from sparse (Thin) provisioned to `Thick` provisioned and therefore correct the used disk space. If you want the new ZVol to be `Thin` then you can ignore this step. Pick one of the following or roll your own:
    sudo zfs set refreservation=auto MyPoolA/Virtual_Disks/NewDatasetName
    sudo zfs set refreservation=auto MyPoolB/Virtual_Disks/MyDatasetA
    sudo zfs set refreservation=auto RemotePool/Virtual_Disks/MyDatasetA
  3. Delete Source Dataset/ZVol (optional)
    If you do this, then the process will turn from a copy into a move. This can be done in the TrueNAS GUI.
    sudo zfs destroy -R MyPoolA/Virtual_Disks/MyDatasetA

Copy/Move a Dataset - (zfs send | zfs receive) (Using Snapshots)

  • Copy mounted Datasets or online ZVols (although this is not best practice as VMs should be shut down first).
  • This will work across pools including remote pools.
  • If you delete the sources this process will then act as a move.
  • The use of snapshots is required when the Dataset is mounted or the ZVol is in use.

The following will show you how to copy or move Datasets/ZVols using snapshots.

  1. Create a `transfer` snapshot on the source
    sudo zfs snapshot -r MyPoolA/MyDatasetA@MySnapshot
  2. Send and Receive the Snapshot
    This uses STDOUT/STDIN stream. Pick one of the following or roll your own:
    sudo zfs send -R MyPoolA/Virtual_Disks/MyDatasetA@MySnapshot | sudo zfs receive MyPoolA/Virtual_Disks/NewDatasetName
    sudo zfs send -R MyPoolA/Virtual_Disks/MyDatasetA@MySnapshot | sudo zfs receive MyPoolB/Virtual_Disks/MyDatasetA
    sudo zfs send -R MyPoolA/Virtual_Disks/MyDatasetA@MySnapshot | ssh <IP|Hostname> zfs receive RemotePool/Virtual_Disks/MyDatasetA (If no SSH trust is set up then you will be prompted for the credentials of the remote server)
  3. Correct Target ZVol disk usage (ZVols only)
    This will change the ZVol from `Thin` provisioned to `Thick` provisioned and therefore correct the used disk space. If you want the new ZVol to be `Thin` then you can ignore this step. Pick one of the following or roll your own:
    sudo zfs set refreservation=auto MyPoolA/Virtual_Disks/NewDatasetName
    sudo zfs set refreservation=auto MyPoolB/Virtual_Disks/MyDatasetA
    sudo zfs set refreservation=auto RemotePool/Virtual_Disks/MyDatasetA
  4. Delete Source `transfer` Snapshot (optional)
    This will get rid of the Snapshot that was created only for this process. This can be done in the TrueNAS GUI.
    sudo zfs destroy -r MyPoolA/Virtual_Disks/MyDatasetA@MySnapshot
  5. Delete Source Dataset/ZVol (optional)
    If you do this, then the process will turn from a copy into a move. This can be done in the TrueNAS GUI.
    sudo zfs destroy -R MyPoolA/Virtual_Disks/MyDatasetA
  6. Delete Target `transfer` Snapshot (optional)
    You do not need this temporary Snapshot on your target pool.
    # Snapshot is on the local target pool
    sudo zfs destroy -r MyPoolB/Virtual_Disks/MyDatasetA@MySnapshot
    
    or
    
    # Snapshot is on a remote server
    ssh <IP|Hostname> zfs destroy -r RemotePool/Virtual_Disks/MyDatasetA@MySnapshot (If no SSH trust is set up then you will be prompted for the credentials of the remote server)

Copy/Move a  Dataset - (rsync) ????

Alternatively, you can use rsync -auv /mnt/pool/directory /mnt/pool/dataset to copy files and avoid permission issues. Not sure where I got this from, maybe a Bing search, so it is untested.

Delete a Dataset's Snapshot(s)

A collection of delete commands.

Notice: there is a difference between -R and -r

.....

# Delete Dataset (recursively)
zfs destroy -R MyPoolA/MyDatasetA

# Delete Snapshot (recursively)
zfs destroy -r MyPoolA/MyDatasetA@yesterday

Move Files

Files are what you imagine: ordinary files and folders inside a dataset's filesystem, not Datasets themselves.

Incremental Backups (Rolling Backups)

  • Snapshots are NOT backups
    • They only record changes (file deltas); the previous snapshots and file system are required to rebuild the full dataset.
    • These are good to protect from Ransomware.
    • Snapshots can be used to create backups on a remote pool.

Keeping data on a single pool in one location exposes it to risks like theft and natural or human disasters. Making regular backups of the entire pool is vital. ZFS provides a built-in serialization feature that can send a stream representation of the data to standard output. Using this feature, storing this data on another pool connected to the local system is possible, as is sending it over a network to another system. Snapshots are the basis for this replication (see the section on ZFS snapshots). The commands used for replicating data are zfs send and zfs receive.

An incremental stream replicates the changed data rather than the entirety of the dataset. Sending the differences alone takes much less time to transfer and saves disk space by not copying the whole dataset each time. This is useful when replicating over a slow network or one charging per transferred byte.

Although I refer to datasets you can use this on the pool itself by selecting the `root dataset`.

  • `zfs send` switches explained
    • -I
      • Sends all of the snapshots between the 2 defined snapshots as separate snapshots.
      • This should be used for making a full copy of a dataset.
      • Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot.
      • I think it also sends the first and last snapshots as specified in the command.
      • If this is used, it will generate an incremental replication stream.
      • This succeeds if the initial snapshot already exists on the receiving side.
    • -i
      • Calculates the delta/changes between the 2 defined snapshots and then sends that as a snapshot.
      • If this is used, it will generate an incremental replication stream.
      • This succeeds if the initial snapshot already exists on the receiving side.
    • -p
      • Copies the dataset properties including compression settings, quotas, and mount points.
    • -R
      • This selects the dataset and all of its children (sub-datasets) rather than just the dataset itself.
      • Generate a replication stream package, which will replicate the specified file system, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved
      • If the -i or -I flags are used in conjunction with the -R flag, an incremental replication stream is generated. The current values of properties, and current snapshot and file system names are set when the stream is received. If the -F flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed. If the -R flag is used to send encrypted datasets, then -w must also be specified.
  • `zfs receive` switches explained
    • -d
      • If the -d option is specified, all but the first element of the sent snapshot's file system path (usually the pool name) is used and any required intermediate file systems within the specified one are created.
      • The dataset's path will be maintained (apart from the pool/root-dataset element removal) on the new pool but start from the target dataset. If any intermediate datasets need to be created, they will be.
      • If you leave this switch on whilst transferring within the same pool you might have issues.
      • Discard the first element of the sent snapshot's file system name, using the remaining elements to determine the name of the target file system for the new snapshot as described in the paragraph above.
      • The -d and -e options cause the file system name of the target snapshot to be determined by appending a portion of the sent snapshot's name to the specified target filesystem.
    • -e
      • If the -e option is specified, then only the last element of the sent snapshot's file system name (i.e. the name of the source file system itself) is used as the target file system name.
      • This takes the target dataset as the location to put this dataset into.
      • Discard all but the last element of the sent snapshot's file system name, using that element to determine the name of the target file system for the new snapshot as described in the paragraph above.
      • The -d and -e options cause the file system name of the target snapshot to be determined by appending a portion of the sent snapshot's name to the specified target filesystem.
    • -F
      • Be careful with this switch.
      • This is only required if the remote filesystem has had changes made to it.
      • Can be used to effectively wipe the target and replace with the send stream.
      • Its main benefit is that your automated backup jobs won't fail because an unexpected/unwanted change to the remote filesystem has been made.
      • Force a rollback of the file system to the most recent snapshot before performing the receive operation.
      • If receiving an incremental replication stream (for example, one generated by zfs send -R [-i|-I]), destroy snapshots and file systems that do not exist on the sending side.
    •  -u
      • Prevents mounting of the remote backup.
      • File system that is associated with the received stream is not mounted.
  • `zfs snapshot` switches explained
    • -r
      • Recursively create snapshots of all descendent datasets
  • `zfs destroy` switches explained
    • -R
      • Use this for deleting Datasets and ZVols.
      • Recursively destroy all dependents, including cloned file systems outside the target hierarchy.
    • -r
      • Use this for deleting snapshots.
      • Recursively destroy all children.
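  • `-d` vs `-e` path example
    • A sketch of how these two `zfs receive` switches change where the received dataset lands, assuming a hypothetical source dataset MyPoolA/Projects/Photos (all names are placeholders):
      # With -d the first element (MyPoolA) is dropped and the rest of the path is kept:
      sudo zfs send MyPoolA/Projects/Photos@s1 | sudo zfs receive -d MyPoolB/Backup
      # -> received as MyPoolB/Backup/Projects/Photos
      # With -e only the last element (Photos) is kept:
      sudo zfs send MyPoolA/Projects/Photos@s1 | sudo zfs receive -e MyPoolB/Backup
      # -> received as MyPoolB/Backup/Photos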

This is done by copying snapshots to the backup location, i.e. using the -i/-I switches.

  • The command example - Specify increments to send
    1. Create a new snapshot of the filesystem.
      sudo zfs snapshot -r MyPoolA/MyDatasetA@MySnapshot4
    2. Determine the last snapshot that was sent to the backup server. eg:
      @MySnapshot2
    3. Send all snapshots, from the snapshot found in step 2 up to the new snapshot created in step 1, to the backup server/location. They will be unmounted and so at very low risk of being modified.
      sudo zfs send -I @MySnapshot2 MyPoolA/MyDatasetA@MySnapshot4 | sudo zfs receive -u MyPoolB/Backup/MyDatasetA
      
      or
      
      sudo zfs send -I @MySnapshot2 MyPoolA/MyDatasetA@MySnapshot4 | ssh <IP/Hostname> zfs receive -u MyPoolB/Backup/MyDatasetA

What about send -RI? A sketch follows below.
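
A hedged sketch of a recursive incremental send, combining -R with -I as described in the switch notes above (names are placeholders; the receiving side is assumed to already have @MySnapshot2 for each dataset involved):

  sudo zfs send -R -I @MySnapshot2 MyPoolA/MyDatasetA@MySnapshot4 | sudo zfs receive -d -u MyPoolB/Backup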


Notes

These are various articles I have used for my research.

General

This section might get some information........

Datasets

When looking at managing datasets, people can get files and datasets mixed up, so quite a few of these links cover file operations instead of `ZFS Dataset` commands. That is fine if you just want to make a copy of the files at the file level, with no snapshots etc.

  • General
    • Creating ZFS Data Sets and Compression - The Urban Penguin
      • ZFS file systems are created with the pools; datasets allow more granular control over some elements of your file systems, and this is where datasets come in. Datasets have boundaries made from directories, and any properties set at that level will flow to subdirectories below until a new dataset is defined lower down. By default in Solaris 11, each user's home directory is defined by its own dataset.
        zfs list
        zfs get all rpool/data1
  • Copying/Moving/Cloning/Replication
    • ZFS Administration, Part XIII- Sending and Receiving Filesystems | Aaron Toponce | archive.org
      • An indepth document on ZFS send and receive.
      • Sending a ZFS filesystem means taking a snapshot of a dataset, and sending the snapshot. This ensures that while sending the data, it will always remain consistent, which is the crux for all things ZFS. By default, we send the data to a file. We then can move that single file to an offsite backup, another storage server, or whatever. The advantage a ZFS send has over “dd” is the fact that you do not need to take the filesystem offline to get at the data. This is a Big Win IMO.
      • Again, I can’t stress the simplicity of sending and receiving ZFS filesystems. This is one of the biggest features in my book that makes ZFS a serious contender in the storage market. Put it in your nightly cron, and make offsite backups of your data with ZFS sending and receiving. You can send filesystems without unmounting them. You can change dataset properties on the receiving end. All your data remains consistent. You can combine it with other Unix utilities.
      • How to send snapshots to a RAW file and back:  Will this work with ZVols and RAW VirtualBox images ???
        # Create RAW Backup
        zfs snapshot tank/test@tuesday
        zfs send tank/test@tuesday > /backup/test-tuesday.img
        
        # Extract RAW Backup
        zfs receive tank/test2 < /backup/test-tuesday.img
      • This chapter is part of a larger book.
    • SOLVED - How to move dataset | TrueNAS Community
      • Q: I have 2 top level datasets and I want to make the minio_storage dataset a sublevel of production_backup. The following command did not work:
        mv /mnt/z2_bunker/minio_storage /mnt/z2_bunker/production_backup
      • So you use the dataset addressing, not the mounted location:
        zfs rename z2_bunker/minio_storage z2_bunker/production_backup/minio_storage
    • SOLVED - Fastest way to copy or move files to dataset? | TrueNAS Community
      • Q: I want to move my /mnt/default/media dataset files to /mnt/default/media/center dataset, to align with new Scale design. I’m used to Linux ways, rsync, cp, mv. Is there a faster/better way using Scale tools?
        • A:
          • winnielinnie (1)
            • Using the GUI, create a new dataset: testpool/media
            • Fill this dataset with some sample files under /mnt/testpool/media/
            • Using the command-line, rename the dataset temporarily
              • zfs rename testpool/media testpool/media1
            • Using the GUI, create a new dataset (again): testpool/media
            • Now there exists testpool/media1 and testpool/media
            • Finally, rename testpool/media1 to testpool/media/center
              • zfs rename testpool/media1 testpool/media/center
            • The dataset formerly known as testpool/media1 remains intact; however, it is now located under testpool/media/center, as well as its contents under /mnt/testpool/media/center/
          • winnielinnie (2)
            • You can rsync directly from the Linux client to TrueNAS with a user account over SSH.
            • Something like this, as long as you've got your accounts, permissions, and datasets configured properly.
              rsync -avhHxxs --progress /home/shig/mydata/ shig@192.168.1.100:/mnt/mypool/mydata/
            • No need to make multiple trips through NFS or SMB. Just rsync directly, bypassing everything else.
          • Whattteva
            • Typically, it's done through ssh and instead of the usual:
              zfs send pool1/dataset1@snapshot | zfs recv pool2/dataset2
              
            • You do:
              zfs send pool1/dataset1@snapshot | ssh nas2 zfs recv nas2/dataset2
    • truenas move a dataset between pools - Search - Intelligent search from Bing
      • To move datasets between pools in TrueNAS, you can use one of the following methods:
        • Use the zfs command to duplicate in SSH environment, then export old pool and import new one.
        • Create the dataset on the second pool and cp/mv the data.
        • Use the zfs snapshot command to create a snapshot of the dataset you want to move.
        • Use rsync to copy the data from one dataset to the next and preserve the permissions and timestamps in doing so.
        • Use mv command to move the dataset.
    • How to move a dataset from one ZFS pool to another ZFS pool | TrueNAS Community
      • Q: I want to move "dataset A" from "pool A" completely over to "pool B". (Read some postings about this here on the forum, but I'm searching for a quite "easy" way like: open "mc" in the terminal, go to "dataset A", press F6 and move it to "pool B").
        • A:
          • Rsync
            • cp/mv the data
            • ZFS Replicate
              zfs snapshot poolA/dataset@migrate
              zfs send -v poolA/dataset@migrate | zfs recv poolB/dataset
              
            • For local operations mv or cp are going to be significantly faster. And also easier for the op.
            • If using cp, remember to use cp -a (archive mode) so file dates get preserved and symlinks don't get traversed.
            • When using ZFS replicate, do consider using the "-p" argument. From the man page:
              • -p, --props
              • Include the dataset's properties in the stream. This flag is implicit when -R is specified. The receiving system must also support this feature. Sends of encrypted datasets must use -w when using this flag.
            • That means the following would be the best way to get most data, properties and so on transferred?
              zfs snapshot poolA/dataset@migrate
              zfs send -vR poolA/dataset@migrate | zfs recv poolB/dataset
            • Pool Cloning Script
              • Copies the snapshot history from the old pool too.
              • Have a look for reference only. Unless you know what this script does and how it works, do not use it.
            • I need to do essentially the same thing, but I'm going from an encrypted pool to another encrypted pool and want to keep all my snapshots. I wasn't sure how to do this in the terminal.
              • zfs snapshot poolA/dataset@migrate
                zfs send -Rvw poolA/dataset@migrate | zfs recv -d poolB
              • I then couldn't seem to load a key and change it to inherit from the new pool. However in TrueNAS I could unlock, then force the inheritance, which is fine, but not sure how to do this through the terminal. It was odd that I also couldn't directly load my key, I had to use the HASH in the dialog when you unselect use key.
    • SOLVED - Copy/Move dataset | TrueNAS Community
      • Pretty much I want to copy/move/shuffle some datasets around, is this possible?
        • Create the datasets where you want them, copy the data into them, then delete the old ones. When moving or deleting large amounts of data, be aware of your snapshots because they can end up taking up quite a bit of space.
        • Also create the datasets using the GUI and use the CLI to copy the data to the new location. This will be the fastest. Then once you verify your data and all your new shares you can delete the old datasets in the GUI.
        • Or, if you want to move all existing snapshots and properties, you may do something like this:
          • Create final source snapshot
            zfs snapshot -r Data2/Storage@copy
          • Copy the data:
            zfs send -Rv Data2/Storage@copy | zfs receive -F Data1/Storage
          • Delete created snapshots
            zfs destroy -r Data1/Storage@copy ; zfs destroy -r Data2/Storage@copy
    • Sending and Receiving ZFS Data - Oracle Solaris ZFS Administration Guide
      • The zfs send command creates a stream representation of a snapshot that is written to standard output. By default, a full stream is generated. You can redirect the output to a file or to a different system. The zfs receive command creates a snapshot whose contents are specified in the stream that is provided on standard input. If a full stream is received, a new file system is created as well. You can send ZFS snapshot data and receive ZFS snapshot data and file systems with these commands. See the examples in the next section.
        • You can use the zfs send command to send a copy of a snapshot stream and receive the snapshot stream in another pool on the same system or in another pool on a different system that is used to store backup data. For example, to send the snapshot stream on a different pool to the same system, use syntax similar to the following:
        • This page will tell you how to send and receive snapshots.
    • How to migrate a dataset from one pool to another in TrueNAS CORE ? - YouTube | HomeTinyLab
      • The guy is a bit slow but covers the whole process and seems only to use the TrueNAS CORE GUI with snapshots and replication tasks.
        • He then uses Rsync in a dry run to compare files in both locations to make sure they are the same.
    • How to use snapshots, clones and replication in ZFS on Linux | HowToForge
      • In this tutorial, I will show you step by step how to work with ZFS snapshots, clones, and replication. Snapshot, clone. and replication are the most powerful features of the ZFS filesystem.
      • Snapshot, clone, and replication are the most powerful features of ZFS. Snapshots are used to create point-in-time copies of file systems or volumes, cloning is used to create a duplicate dataset, and replication is used to replicate a dataset from one datapool to another datapool on the same machine or to replicate datapools between different machines.
    • linux - ZFS send/recv full snapshot - Unix & Linux Stack Exchange
      • Q:
        • I have been backing up my ZFS pool in Server A to Server B (backup server) via zfs send/recv, and using daily incremental snapshots.
        • Server B acts as a backup server, holding 2 pools to Server A and Server C respectively (zfs41 and zfs49/tank)
        • Due to hardware issues, the ZFS pool in Server A is now gone - and I want to restore/recover it asap.
        • I would like to send back the whole pool (including the snapshots) back to Server A, but I'm unsure of the exact command to run.
      • A:
        • There is a worked example with explanations.
    • ZFS send/receive over ssh on linux without allowing root login - Super User
      • Q: I wish to replicate the file system storage/photos from source to destination without enabling ssh login as root.
      • A:
        • This doesn't completely remove root login, but it does secure things beyond a full-featured login.
        • Set up an SSH trust by copying the local user's public key (usually ~/.ssh/id_rsa.pub) to the authorized_keys file (~/.ssh/authorized_keys) for the remote user. This eliminates password prompts, and improves security as SSH keys are harder to bruteforce. You probably also want to make sure that sshd_config has PermitRootLogin without-password -- this restricts remote root logins to SSH keys only (even the correct password will fail).
        • You can then add security by using the ForceCommand directive in the authorized_keys file to permit only the zfs command to be executed.
    • ZFS send single snapshot including descendent file systems - Stack Overflow
      • Q:  Is there a way to send a single snapshot including descendant file systems? 'zfs send' only sends the the top level file system even if the snapshot was created using '-r'. 'zfs send -R' sends the descendant file systems but includes all the previous snapshots, which for disaster recovery purposes consumes unnecessary space if the previous snapshots are not needed in the disaster recovery pool.
      • A: In any case, while you cannot achieve what you want in a direct way, you can reach the desired state. The idea is to prune your recovery set so that it only has the latest snapshot.
    • Sending a ZFS Snapshot | Oracle Solaris Help Center - You can use the zfs send command to send a copy of a snapshot stream and receive the snapshot stream in another pool on the same system or in another pool on a different system that is used to store backup data. For example, to send the snapshot stream on a different pool to the same system, use a command similar to the following example:
    • Sending a ZFS Snapshot | Oracle Solaris ZFS Administration Guide - You can use the zfs send command to send a copy of a snapshot stream and receive the snapshot stream in another pool on the same system or in another pool on a different system that is used to store backup data.
    • Receiving a ZFS Snapshot | Oracle Solaris ZFS Administration Guide - This page tells you how to receive streams from the `zfs send` command.
    • Sending and Receiving Complex ZFS Snapshot Streams | Oracle Solaris ZFS Administration Guide - This section describes how to use the zfs send -I and -R options to send and receive more complex snapshot streams.
    • Migrating Data With ZFS Send and Receive - Stephen Foskett, Pack Rat
      • I like ZFS Send and Receive, but I'm not totally sold on it. I've used rsync for decades, so I'm not giving it up anytime soon. Even so, I can see the value of ZFS Send and Receive for local migration and data management tasks as well as the backup and replication tasks that are typically talked about.
      • I’m a huge fan of rsync as a migration tool, but FreeNAS is ZFS-centric so I decided to take a shot at using some of the native tools to move data. I’m not sold on it for daily use, but ZFS Send and Receive is awfully useful for “internal” maintenance tasks like moving datasets and rebuilding pools. Since this kind of migration isn’t well-documented online, I figured I would make my notes public here.
  • Send to a File
    • SOLVED - Backup pool.... | TrueNAS Community
      • You can also redirect ZFS Send to a file and tell ZFS Receive to read from a file. This is handy when you need to rebuild a pool as well as for backup and replication.
      • In this example, we will send gang/scooby to a file and then restore that file later.
        1. Try to quiet gang/scooby
        2. Make a snapshot: zfs snap gang/scooby@ghost
        3. Send that snapshot to a file: zfs send gang/scooby@ghost | gzip > /tmp/ghost.gz
        4. Do what you need to gang/scooby
        5. Restore the data to gang/scooby: gzcat /tmp/ghost.gz | zfs recv -F gang/scooby
        6. Promote gang/scooby’s new snapshot to become the dataset’s data: zfs rollback gang/scooby@ghost
      • Q:
        • I wanted to know if I could "transfer" all the Snap I created to the gz files in one command?
        • Can I "move" them back to Pool / dataset in one command?
      • A:
        • Yeah, just snapshot the parent directory with the -r flag then send with the -R flag. Same goes for the receive command.
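        • A hedged sketch of that recursive variant using the same example names (the file path is a placeholder):
          zfs snapshot -r gang@ghost
          zfs send -R gang@ghost | gzip > /tmp/gang-ghost.gz
          # Restoring follows the same gzcat | zfs recv pattern as step 5 above.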
    • Best way to backup a small pool? | TrueNAS Community
      • The snapshot(s) live in the same place as the dataset. They are not some kind of magical backup that is stored in an extra location. So if you create a snapshot, then destroy the dataset, the dataset and all snapshots are gone.
      • You need to create a snapshot, replicate that snapshot by the means of zfs send ... | zfs receive ... to a different location, then replace your SSD (and as I read it create a completely new pool) and then restore the snapshot by the same command, just the other way round.
      • Actually the zfs receive ... is optional. You can store a snapshot (the whole dataset at that point in time, actually) in a regular file:
        zfs snapshot <pool>/<dataset>@now
        zfs send <pool>/<dataset>@now > /some/path/with/space/mysnapshot
      • Then to restore:
        zfs receive <pool>/<dataset> </some/path/with/space/mysnapshot
      • You need to do this for all datasets and sub-datasets of your jails individually. There are "recursive" flags to the snapshot as well as to the "send/receive" commands, though. I refer to the documentation for now.
      • Most important takeaway for @TECK and @NumberSix: the snapshots are stored in the pool/dataset. If you destroy the pool by exchanging your SSD you won't have any snapshots. They are not magically saved some place else.
  • Scheduled Backups
    • No ECDSA host key is known for... | TrueNAS Community
      • Q: This is the message I get when I set up replication on our production FreeNAS boxes.
        Replication ZFS-SPIN/CIF-01 -> TC-FREENAS-02 failed: No ECDSA host key is known for tc-freenas-02.towncountrybank.local and you have requested strict checking. Host key verification failed.
      • A: I was trying to do this last night on a freshly installed FREENAS to experiment with the replication process on the same machine. I think the problem appears when the SSH service has not yet been started and you try to setup the replication task. You will get the error message when trying to request the SSH key by pressing the "SSH Key Scan" button. To sum up, you must do the following steps:..........
  • Backup Scripts
  • Incremental Backups
    • Chapter 22. The Z File System (ZFS) - 'zfs send' - Replication | FreeBSD Documentation Portal
      • Keeping data on a single pool in one location exposes it to risks like theft and natural or human disasters. Making regular backups of the entire pool is vital. ZFS provides a built-in serialization feature that can send a stream representation of the data to standard output. Using this feature, storing this data on another pool connected to the local system is possible, as is sending it over a network to another system. Snapshots are the basis for this replication (see the section on ZFS snapshots). The commands used for replicating data are zfs send and zfs receive.
      • This is an excellent read.
    • Chapter 22. The Z File System (ZFS) - 'zfs send' - Incremental Backups | FreeBSD Documentation Portal
      • zfs send can also determine the difference between two snapshots and send individual differences between the two. This saves disk space and transfer time.
      • This is an excellent read.
    • ZFS: send / receive with rolling snapshots - Unix & Linux Stack Exchange
      • Q: I would like to store an offsite backup of some of the file systems on a USB drive in my office. The plan is to update the drive every other week. However, due to the rolling snapshot scheme, I have troubles implementing incremental snapshots.
      • A1:
        • You can't do exactly what you want.
        • Whenever you create a zfs send stream, that stream is created as the delta between two snapshots. (That's the only way to do it as ZFS is currently implemented.) In order to apply that stream to a different dataset, the target dataset must contain the starting snapshot of the stream; if it doesn't, there is no common point of reference for the two. When you destroy the @snap0 snapshot on the source dataset, you create a situation that is impossible for ZFS to reconcile.
        • The way to do what you are asking is to keep one snapshot in common between both datasets at all times, and use that common snapshot as the starting point for the next send stream.
      • A2:
        • Snapshots have arbitrary names. And zfs send -i [snapshot1] [snapshot2] can send the difference between any two snapshots. You can make use of that to have two (or more) sets of snapshots with different retention policies.
        • e.g. have one set of snapshots with names like @snap.$timestamp (where $timestamp is whatever date/time format works for you (time_t is easiest to do calculations with, but not exactly easy to read for humans. @snap.%s.%Y%M%D%H%M%S provides both). Your hourly/daily/weekly/monthly snapshot deletion code should ignore all snapshots that don't begin with @snap.
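        • A minimal sketch of sending the delta between two snapshots from such a named set (pool, dataset and snapshot names are placeholders; the older snapshot is assumed to already exist on the receiving side):
          zfs send -i tank/data@snap.2024-01-01 tank/data@snap.2024-02-08 | zfs receive -u backup/data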
    • Incremental backups with zfs send/recv | ./xai.sh - A guide on how to use zfs send/recv for incremental backups
    • Fast & frequent incremental ZFS backups with zrep – GRENDELMAN.NET
        • ZFS has a few features that make it really easy to back up efficiently and fast, and this guide goes through a lot of the settings in an easy to read manner.
        • ZFS allows you to take a snapshot and send it to another location as a byte stream with the zfs send command. The byte stream is sent to standard output, so you can do with it what you like: redirect it to a file, or pipe it through another process, for example ssh. On the other side of the pipe, the zfs receive command can take the byte stream and rebuild the ZFS snapshot. zfs send can also send incremental changes. If you have multiple snapshots, you can specify two snapshots and zfs send can send all snapshots in between as a single byte stream.
        • So basically, creating a fast incremental backup of a ZFS filesystem consists of the following steps:
          1. Create a new snapshot of the filesystem.
          2. Determine the last snapshot that was sent to the backup server.
          3. Send all snapshots, from the snapshot found in step 2 up to the new snapshot created in step 1, to the backup server, using SSH:
            zfs send -I <old snapshot> <new snapshot> | ssh <backupserver> zfs receive <filesystem>
        • Zrep is a shell script (written in Ksh) that was originally designed as a solution for asynchronous (but continuous) replication of file systems for the purpose of high availability (using a push mechanism). 
          1. Zrep needs to be installed on both sides.
          2. The root user on the backup server needs to be able to ssh to the fileserver as root. This has security implications, see below.
          3. A cron job on the backup server periodically calls zrep refresh. Currently, I run two backups hourly during office hours and another two during the night.
          4. Zrep sets up an SSH connection to the file server and, after some sanity checking and proper locking, calls zfs send on the file server, piping the output through zfs receive:
            ssh <fileserver> zfs send -I <old snapshot> <new snapshot> | zfs receive <filesystem>
          5. Snapshots on the fileserver need not be kept for a long time, so we remove all but the last few snapshots in an hourly cron job (see below).
          6. Snapshots on the backup server are expired and removed according to a certain retention schedule (see below).
    • ZFS incremental send on recursive snapshot | TrueNAS Community
      • Q:
        • I am trying to understand ZFS send behavior, when sending incrementally, for the purposes of backup to another (local) drive.
        • How do people typically handle this situation where you would like to keep things incremental, but datasets may be created at a later time?
        • What happens to tank/stuff3, since it was not present in the initial snapshot set sent over?
      • A:
        • It's ignoring the incremental option and creating a full stream for that dataset. A comment from libzfs_sendrecv.c:
        • If you try to do a non recursive replication while missing the initial snapshot you will get a hard error -- the replication will fail. If you do a recursive replication you will see the warning, but the replication will proceed sending a full stream.
    • Understanding zfs send receive with snapshots | TrueNAS Community
      • Q:
        • I would like to seek some clarity with the usage of zfs send receive with snapshots. When i want to update the pool that i just sent to the other pool via ssh with incremental flag. It seems i can't get it to work. I want the original snapshot compared to new snapshot1 to send the difference to the remote server, is this correct?
      • Q:
        • Would i not still require the -dF switches for the receiving end ? 
      • A1:
        • Not necessarily. If the volume receiving the snapshots is set to "read only", then using the -F option shouldn't be necessary as it is intended to perform a Rollback.
          This is only required if the system on the remote has made changes to the filesystem.
      • A2:
        • If the -d option is specified, all but the first element of the sent snapshot's file system path (usually the pool name) is used and any required intermediate file systems within the specified one are created. It maintains the receiving pools name, rather than renaming it to resemble the sending pool name. So i consider it important since i call it "Pool2" .
      • Q:
        • One other thing, just wish I could do the above easily with the GUI. Would make life much easier than typing it in to ssh.
      • A:
        • Surprise - you can. Look up Replication Tasks in the manual.

ZVols

  • General
    • ZFS Volume Manipulations and Best Practices
      • Typically when you want to move a ZVol from one pool to another, the best method is using zfs send | zfs receive (zfs recv)
      • However there are at least two scenarios where this would not be possible: when moving a ZVol from a Solaris pool to an OpenZFS pool, or when taking a snapshot is not possible, such as when there are space constraints.
      • Moving a ZVol using dd
    • Get ZVol Meta Information
      sudo zfs get all MyPoolA/Virtual_Disks/Virtualmin
      sudo zfs get volblocksize MyPoolA/Virtual_Disks/Virtualmin
    • FreeBSD – PSA: Snapshots are better than ZVOLs - Page 2 – JRS Systems: the blog
      • A lot of people new to ZFS, and even a lot of people not-so-new to ZFS, like to wax ecstatic about ZVOLs. But they never seem to mention the very real pitfalls ZVOLs present.
      • AFAICT, the increased performance is pretty much a lie. I’ve benchmarked ZVOLs pretty extensively against raw disk partitions, raw LVs, raw files, and even .qcow2 files and there really isn’t much of a performance difference to be seen. A partially-allocated ZVOL isn’t going to perform any better than a partially-allocated .qcow2 file, and a fully-allocated ZVOL isn’t going to perform any better than a fully-allocated .qcow2 file. (Raw disk partitions or LVs don’t really get any significant boost, either.)
      • This means for our little baby demonstration here we’d need 15G free to snapshot our 15G ZVol.
  • Copying/Moving
    • How to move VMs to new pool | TrueNAS Community
      • Does anyone know the best approach for moving VMs to a new pool?
        1. Stop your VM(s)
        2. Move the ZVOL(s)
          sudo zfs send <oldpool>/path/to/zvol | sudo zfs receive <newpool>/path/to/zvol
        3. Go to the Devices in the VM(s) and update the location of the disk(s).
        4. Start the VM(s)
        5. After everything is working to your satisfaction the zvols on the old pool can be destroyed as well as the automatic snapshot ("@--HEAD--", IIRC) that is created by the replication command.
      • The only thing I would point out, for anyone else doing this, is that the size of the ZVOLs shrunk when copying them to the new pool. It appears that when VMs and virtual disks are created, SCALE reserves the entire virtual disk size when sizing the ZVOL, but when moving the ZVOL, it compresses it so that empty space on the disk in the guest VM results in a smaller ZVOL. This confused me at first until I realized what was going on.
    • Moving a zvol | TrueNAS Community
      • Is the other pool on the same freeNAS server? If so, snapshot the zvol and replicate it to the other pool.
        sudo zfs snapshot -r pool/zvol@relocate
        sudo zfs send pool/zvol@relocate | sudo zfs receive -v otherpool/zvol
    • Copying raw disk image (from qnap iscsi) into ZVol/Volume - correct "of=" path? | TrueNAS Community
      • I have a VM image file locally on the TrueNas box, but need to copy the disk image file into a precreated Zvol.
      • Tested this one-liner out, it appears to work - you may need to add the -f <format> parameter if it's unable to detect the format automatically:
        qemu-img convert -O raw /path/to/your.file /dev/zvol/poolname/zvolname
    • Moving existing VMs to another pool? | TrueNAS Community
      • Just did this today, it took a bit of digging through different threads to figure it out but here's the process. I hope it'll help someone else who's also doing this for the first time.
      • There are pictures to help you understand
      • uses send/receive
    • How to copy zvol to new pool? | TrueNAS Community
      • With zvols you do not need to take an explicit snapshot, the above commands will do that on the fly (assuming they are offline).
        sudo zfs send oldpool/path/to/zvol | sudo zfs receive newpool/path/to/zvol
  • Wrong size after moving
    • Command / option to assign optimally sized refreservation after original refereservation has been deleted · Issue #11399 · openzfs/zfs · GitHub
      # Correct ZVol Size - (Sparse/Thin) --> Thick
      zfs set refreservation=auto rpool/zvol
      • Yes, it's that easy, but it seems to be barely known even among the developers. I saw it at the following page by accident while actually searching for something completely different:
      • I am also not sure whether this method will restore all behavior of automatically created refreservations. For example, according to the manual, ZFS will automatically adjust refreservation when volsize is changed, but (according to the manual) only when refreservation has not been tampered with in a way that the ZVOL has become sparse.
    • Moved zvol, different size afterwards | TrueNAS Community - Discusses what happens when you copy a ZVol and why the sizes are different than expected.
    • volsize
      # Correct ZVol size - (Sparse/Thin) --> Thick
      sudo zfs set volsize=50G MyPoolA/MyDatasetA
      • Not 100% successful.
      • This works to set the reservation and changes the provisioning type from Thin to Thick, but does not show as 50GB used (the full size of my ZVol).
      • In the TrueNAS GUI, the Parent dataset shows the extra 50GB used but the ZVol dataset still shows the 5GB thin provisioning value.
  • Resize a ZVol
    • This is a useful feature if your VM's hard drive has become full.
    • Resizing Zvol | TrueNAS Community
      • Is it possible to resize a ZVOl volume without destroying any data?
      • You can resize a ZVol with the following command:
        sudo zfs set volsize=new_size tank/name_of_the_zvol
        • To make sure that no issue occurs, you should stop the iSCSI or Virtual Machine it belongs to while performing the change.
        • Your VDEV needs sufficient free space.
      • VDEV advice
        • There is NO way to add disks to a vdev already created. You CAN increase the size of each disk in the vdev, by changing them out one by one, ie change the 4tb drives to 6tb drives. Change out each and then when they are all changed, modify the available space.
        • PS - I just realized that you said you do not have room for an ISCSI drive. Also, built into the ZFS spec is a caveat that you do NOT allow your ZVOL to get over 80% in use. If you do, it goes into storage recovery mode, which changes disk space allocation and tries to conserve disk space. Above 90% is even worse!!!!
  • Change Provisioning Type (Sparse|Thin/Thick)
  • Sector Size
    • All your Virtual Machine sector sizes should be on 4096 unless you need 512.
    • All your iSCSI sector sizes should be on 4096 unless you need 512.
  • VM and iSCSI Sector Size and Compression
    • Are virtual machine zvols created from the GUI optimized for performance? | TrueNAS Community
      • Reading some ZFS optimization guides they recommend to use recordsize/volblocksize = 4K and disable compression.
      • If you run a VM with Ext4 or NTFS, both having a 4k native block size, wouldn't it be best to use a ZVOL with an identical block size for the virtual disk? I have been doing this since I started using VMs, but never ran any benchmarks.
      • It doesn't matter what the workload is - Ext4 will always write 4k chunks. As will NTFS.
      • 16k is simply the default blocksize for ZVOLs as 128k is for datasets, and most probably nobody gave a thought to making that configurable in the UI or changing it at all.
    • ZFS Pool for Virtual Machines – Medo's Home Page
      • Running VirtualBox on a ZFS pool intended for general use is not exactly the smoothest experience. Due to its disk access pattern, what works for all your data will not work for virtual machine disk access.
      • First of all, you don't want compression. Not because data is not compressible but because compression can lead you to believe you have more space than you actually do. Even when you use fixed disk, you can run out of disk space just because some uncompressible data got written within VM
      • Ideally record size should match your expected load. In case of VirtualBox that's 512 bytes. However, tracking 512 byte records takes so much metadata that 4K records are actually both more space efficient and perform better
    • WARNING: Based on the pool topology, 16K is the minimum recommended record size | TrueNAS Community
      WARNING: Based on the pool topology, 16K is the minimum recommended record size. Choosing a smaller size can reduce system performance. 
      • This is the block size set for the ZVol not for the VM or iSCSI that sits on it.
      • You should stay with the default unless you really know what you are doing, in which case you would not be reading this message.
  • Compression
    • Help: Compression level (Tooltip)
      • Encode information in less space than the original data occupies. It is recommended to choose a compression algorithm that balances disk performance with the amount of saved space.
      • LZ4 is generally recommended as it maximizes performance and dynamically identifies the best files to compress.
      • GZIP options range from 1 for least compression, best performance, through 9 for maximum compression with greatest performance impact.
      • ZLE is a fast algorithm that only eliminates runs of zeroes.
      • This tooltip implies that compression causes the disk access to be slower.
    • In a VM there are no files for ZFS to see, and if you do NOT thin/sparse provision, the space is all allocated up front anyway, so compression is a bit pointless.
    • It does not matter whether you 'Thin' or 'Thick' provision a ZVol, it is only when data is written to a block it actually takes up space, and it is only this data that can be compressed.
      • This behaviour is exactly the same as dynamic disks in VirtualBox.
      • I do not know if ZFS is aware of the file system in the ZVol, I suspect it is only binary aware (i.e. block level).
    • When using NVMe, the argument that loading and uncompressing compressed data is quicker than loading normal data from the disk might not hold water. This could be true for Magnetic disks.
  • Quotas
    • Setting ZFS Quotas and Reservations - Oracle Solaris ZFS Administration Guide
      • You can use the quota property to set a limit on the amount of disk space a file system can use. In addition, you can use the reservation property to guarantee that a specified amount of disk space is available to a file system. Both properties apply to the dataset on which they are set and all descendents of that dataset.
      • A ZFS reservation is an allocation of disk space from the pool that is guaranteed to be available to a dataset. As such, you cannot reserve disk space for a dataset if that space is not currently available in the pool. The total amount of all outstanding, unconsumed reservations cannot exceed the amount of unused disk space in the pool. ZFS reservations can be set and displayed by using the zfs set and zfs get commands.
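      • A minimal sketch of setting and checking these properties (the dataset name is a placeholder):
        sudo zfs set quota=100G MyPoolA/MyDatasetA
        sudo zfs set reservation=20G MyPoolA/MyDatasetA
        sudo zfs get quota,reservation MyPoolA/MyDatasetA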

Snapshots

  • General
    • You cannot chain creating a snapshot with send and receive as it fails.
    • zfs - Do parent file system snapshot reference it's children datasets data or only their onw data? - Ask Ubuntu
      • Each dataset, whether child or parent, is its own file system. The file system is where files and directories are referenced and saved.
      • If you make a recursive snapshot for rpool, it doesn't create a single snapshot. It creates multiple snapshots, one for each dataset.
      • A very good explanation.
    • Datasets are in a loose hierarchy, and if you want to snapshot a dataset and its sub-datasets you need to use the -r switch. Each dataset will be snapshotted separately but the snapshots will all share the same name, allowing them to be addressed as one (see the example below).
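    • A minimal sketch of that recursive snapshot and how the per-dataset snapshots show up (names are placeholders):
      sudo zfs snapshot -r MyPoolA/MyDatasetA@before-change
      sudo zfs list -t snapshot -r MyPoolA/MyDatasetA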
  • Tutorials
    • How to create, clone, rollback, delete snapshots on TrueNAS - Server Decode - TrueNAS snapshots can help protect your data, and in this guide, you will learn the steps to create, clone, rollback, and delete TrueNAS snapshots using the GUI.
    • Some basic questions on TrueNAS replications - Visual Representation Diagram and more| TrueNAS Community
      • If you're a visual person, such as myself (curse the rest of this analytical world!), then perhaps this might help. Remember that a "snapshot" is in fact a read-only filesystem at the exact moment in time that the snapshot was taken.
      • This diagram is awesome.
      • Snapshots are not "stored". Without being totally technically accurate here, think about it like this: a block in ZFS can be used by one or more consumers, just like when you use a UNIX hardlink, where you have two or more filenames pointing at the same file contents (which therefore takes no additional space for the second filename and beyond).
      • When you take a snapshot, ZFS does a clever thing where it assigns the current metadata tree for the dataset (or zvol in your case) to a label. This happens almost instantaneously, because it's a very easy operation. It doesn't make a copy of the data. It just lets it sit where it was. However, because ZFS is a copy-on-write filesystem, when you write a NEW block to the zvol, a new block is allocated, the OLD block is not freed (because it is a member of the snapshot), and the metadata tree for the live zvol is updated to accommodate the new block. NO changes are made to the snapshot, which remains identical to the way it was when the snapshot was taken.
      • So it is really data from the live zvol which is "stored", and when you take a snapshot, it just freezes the metadata view of the zvol. You can then read either the live zvol or any snapshot you'd prefer. If this sounds like a visualization nightmare for the metadata, ... well, yeah.
      • When you destroy a ZFS snapshot, the system will then free blocks to which no other references exist.
  • Deleting
    • Deleting snapshots | TrueNAS Community
      • Q: Does anyone know the command line to delete ALL snapshots? 
      • A: It's possible to do it from the command line, but dangerous. If you mess up, you could delete ALL of your data!
        zfs destroy poolname/datasetname@%
        
        The % is the wildcard.
    • [Question] How to delete all snapshots from a specific folder? | Reddit
      • Q:
        • Recently I discovered my home NAS created 20.000+ snapshots in my main pool, way beyond the recommended 10000 limit and causing a considerable performance hit on it. After looking for the culprit, I discovered most of them in a single folder with a very large file structure inside (which I can't delete or better manage it because years and years of data legacy on it).
          • I don't want to destroy all my snapshots, I just want to get rid of them in that specific folder.
        • A1:
          • # Test the output first with:
            zfs list -t snapshot -o name | grep ^tank@Auto
            
            # Be careful with this as you could delete the wrong data:
            zfs list -t snapshot -o name | grep ^tank@Auto | xargs zfs destroy -r
        • A2:
          • You can filter snapshots like you are doing, and select the checkbox at the top left, it will select all filtered snapshots even in other pages and click delete, it should ask for confirmation etc. it will be slower than the other option mentioned here for CLI. If you need to concurrently administrate from GUI open another tab and enter GUI as the page where you deleted snapshots will hang until it’s done, probably 20-30 min.
      • How to delete all but last [n] ZFS snapshots? - Server Fault
        • Q:
          • I'm currently snapshotting my ZFS-based NAS nightly and weekly, a process that has saved my ass a few times. However, while the creation of the snapshot is automatic (from cron), the deletion of old snapshots is still a manual task. Obviously there's a risk that if I get hit by a bus, or the manual task isn't carried out, the NAS will run out of disk space.
          • Does anyone have any good ways / scripts they use to manage the number of snapshots stored on their ZFS systems? Ideally, I'd like a script that iterates through all the snapshots for a given ZFS filesystem and deletes all but the last n snapshots for that filesystem.
          • E.g. I've got two filesystems, one called tank and another called sastank. Snapshots are named with the date on which they were created: sastank@AutoD-2011-12-13 so a simple sort command should list them in order. I'm looking to keep the last 2 week's worth of daily snapshots on tank, but only the last two days worth of snapshots on sastank.
        • A1:
          • You may find something like this a little simpler
            zfs list -t snapshot -o name | grep ^tank@Auto | tac | tail -n +16 | xargs -n 1 zfs destroy -r
            • Output the list of the snapshot (names only) with zfs list -t snapshot -o name
            • Filter to keep only the ones that match tank@Auto with grep ^tank@Auto
            • Reverse the list (previously sorted from oldest to newest) with tac
            • Limit output to the 16th oldest result and following with tail -n +16
            • Then destroy with xargs -n 1 zfs destroy -vr
          • Deleting snapshots in reverse order is supposedly more efficient or sort in reverse order of creation.
            zfs list -t snapshot -o name -S creation | grep ^tank@Auto | tail -n +16 | xargs -n 1 zfs destroy -vr
          • Test it with
            ...|xargs -n 1 echo
        • A2
          • This totally doesn't answer the question itself, but don't forget you can delete ranges of snapshots.
            zfs destroy zpool1/dataset@20160918%20161107
          • Would destroy all snapshots from "20160918" to "20161107" inclusive. Either end may be left blank, to mean "oldest" or "newest". So you could cook something up that figures out the "n" then destroy "...%n"..
      • How to get rid of 12000 snapshots? | TrueNAS Community
        • Q:
          • I received a notification saying that I have over the recommended number of snapshots (12000+!!!).
          • I'm not quite sure how or why I would have this many as I don't have any snapshot tasks running at all.
          • The GUI allows me to see 100 snapshots at a time and bulk delete 100 at a time. But, even when I do this it fails to delete half of the snapshots because they have a dependent clone. It would take a very long time to go through 12000 and delete this way. So, am looking for a better way.
          • How can I safely delete all (or every one that I can) of these snapshots?
        • A:
          • In a root shell run
            zfs list -t snapshot | awk '/<pattern>/ { printf "zfs destroy %s\n", $1 }'
          • Examine the output and adjust <pattern> until you see the destroy statements you want. Then append to the command:
            zfs list -t snapshot | awk '/<pattern>/ { printf "zfs destroy %s\n", $1 }' | sh
      • Dataset is Busy - Cannot delete snapshot error

        • There are a couple of different things that can cause this error.
          1. A Hold is applied to a snapshot of that dataset.
          2. The ZVol is being used in a VM.
          3. The ZVol is being used in an iSCSI.
          4. The ZVol/Dataset is currently being used in a replication process.
        • What is a Hold? This is a method of protecting a snapshot from modification and deletion.
          • Navigate to the snapshot, expand the details and you will see the option.
        • How to fix 'dataset is busy' caused by this error.
          • Find the snapshot with the 'Hold' option set by using this command which will show you the 'Holds'.
            sudo zfs list -r -t snap -H -o name <Your Pool>/Virtual_Disks/Virtualmin | sudo xargs zfs holds
          • Remove the 'Hold' from the relevant snapshot.
          • You can now delete the ZVol/Dataset
          • Done.
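          • A hedged CLI alternative for removing the hold: `zfs holds` (run above) prints the hold tag in its TAG column, and you can release it with that tag (the snapshot name and tag here are placeholders).
            sudo zfs release <tag> <Your Pool>/Virtual_Disks/Virtualmin@<snapshot>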
    • Deleting Snapshots. | TrueNAS Community
      • Q: My question is, 12 months down the line if I need to delete all snapshots, as a broad example would it delete data from the drive which was subsequently added since snapshots were created?
      • A: No. The data on the live filesystem (dataset) will not be affected by destroying all of the dataset's snapshots. It means that the only data that will remain is that which lives on the live filesystem. (Any "deleted" records that only existed because they still had snapshots pointing to them will be gone forever. If you suddenly remember "Doh! That one snapshot I had contained a previously deleted file which I now realize was important!" Too bad, whoops! It's gone forever.)
      • Q:Also when a snapshot is deleted does it free up the data being used by that snapshot? 
      • A: The only space you will liberate are records that exclusively belong to that snapshot. Otherwise, you won't free up such space until all snapshots (that point to the records in question) are likewise destroyed.
        See this post for a graphical representation. (I realize I should have added a fourth "color" to represent the "live filesystem".)
  • Promoting
    • Clone and Promote Snapshot Dataset | Documentation Hub
    • System updated to 11.1 stable: promote dataset? | TrueNAS Community
      • Promote Dataset: only applies to clones. When a clone is promoted, the origin filesystem becomes a clone of the clone making it possible to destroy the filesystem that the clone was created from. Otherwise, a clone can not be destroyed while its origin filesystem exists.
    • zfs-promote.8 — OpenZFS documentation
      • Promote clone dataset to no longer depend on origin snapshot.
      • The zfs promote command makes it possible to destroy the dataset that the clone was created from. The clone parent-child dependency relationship is reversed, so that the origin dataset becomes a clone of the specified dataset.
      • The snapshot that was cloned, and any snapshots previous to this snapshot, are now owned by the promoted clone. The space they use moves from the origin dataset to the promoted clone, so enough space must be available to accommodate these snapshots. No new space is consumed by this operation, but the space accounting is adjusted. The promoted clone must not have any conflicting snapshot names of its own. The zfs rename subcommand can be used to rename any conflicting snapshots.
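    • A minimal sketch of the clone-then-promote flow (all names are placeholders):
      sudo zfs clone MyPoolA/MyDatasetA@MySnapshot1 MyPoolA/MyClone
      sudo zfs promote MyPoolA/MyClone
      # MyPoolA/MyDatasetA is now a clone of MyPoolA/MyClone, so it can be destroyed if no longer needed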

 

Files

There are various GUIs and apps you can use to move files on your TrueNAS with, but mileage may vary. Moving files is not the same as moving Datasets or ZVols, and you must make sure no-one is using the files that you are manipulating.

  • GUIs
    • Midnight Commander (mc)
    • TrueCommand 2.0
    • Other SSH software
      • FlashFXP
      • WinSCP
    • Graphical file manager application/plugin? | TrueNAS Community
      • I was doing a search to see if there was a graphical file manager that, for example, Qnap offers with their NAS units/in their NAS operating system and so far, I haven't really been able to find one.
      • feature requests:
      • How do people migrate select data/files between TrueNAS servers then? They use replication, ZFS to ZFS.
      • If you want to leverage ZFS's efficiency ("block-based", not "file-based") and "like for like" copy of a dataset/snapshot, then ZFS-to-ZFS is what to use.
      • In your case, you want to copy and move files around like a traditional file manager ("file-based"), so your options are to use the command-line, or your file browser, and move/copy files from one share to another. Akin to local file operations, but in your case these would be network folders, not local folders.
      • As for the built-in GUI file manager for TrueNAS, it's likely only going to be available for SCALE, and possibly only supports local file management (not server-to-server.) It appears to be backlogged, and not sure what iXsystems' priority is.
      • The thread is a bit of a discussion about this subject as well.
  • CLI
    • Fastest way to copy (or move) files between shares | TrueNAS Community
      • John Digital
        • The most straightforward way to do this is likely mv. Issue this command at the TN host terminal. Adjust command for your actual use case.
          mv /mnt/tank/source /mnt/tank/destination
        • However it won't tell you progress or anything. So a fancier way is to go like this. Again, adjust for your use case. The command is included with the --dry-run flag. When you're sure you've got it right, remove the --dry-run.
          rsync -avzhP --remove-source-files /mnt/tank/dataset1 /mnt/tank/dataset2 --dry-run
        • Then, after you are satisfied it's doing what you need, run the command without the --dry-run flag. You'll then need to run this to remove all the empty directories (if any):
          find /mnt/tank/dataset1 -type d -empty -delete
      • Pitfrr
        • You could also use mc in the terminal. It gives you an interface and works even with remote systems.
      • Basil Hendroff
        • If what you're effectively doing is trying to rename the original dataset, the following approach will not move any files at all:
          1. Remove the share attached to the dataset.
          2. Rename the dataset e.g. if your pool is named tank then zfs rename tank/old_dataset_name tank/new_dataset_name
          3. Set up the share against the renamed dataset.
      • macmuchmore
          mv /mnt/Pool1/Software /mnt/Pool1/Dataset1/
      • The ultimate guide to manage your files via SSH
        • Learning how to manage files in SSH is quite easy. Commands are simple; only a simple click is needed to run and execute.
        • All commands are explained.
        • There is a downloadable PDF version.

Managing Hardware

This section deals with the times you need to interact with the hardware.

Hard Disks

  • Get boot drive serials
    • Storage --> Disks
  • Changing Drives
  • Testing
    • Managing S.M.A.R.T. Tests | Documentation Hub - Provides instructions on running S.M.A.R.T. tests manually or automatically, using Shell to view the list of tests, and configuring the S.M.A.R.T. test service.
    • Hard Drive Burn-in Testing | TrueNAS Community - For somebody (such as myself) looking for a single cohesive guide to burn-in testing, I figured it'd be nice to have all of the info in one place to just follow, with relevant commands. So, having worked my way through reading around and doing my own testing, here's a little more n00b-friendly guide, written by a n00b.
    • Manual disk test
      • (Storage --> disks)
        • Select the disk you want to test
        • Click ‘Manual Test’
        • Select the relevant test
        • Click ok
      • When you start a manual test, the response might take a moment.
      • Not all drives support ‘Conveyance Self-test’.
      • If your RAID card is not a modern one, it might not pass the tests correctly to the drive (also you should not use a RAID card).
      • When you run a long test, make a note of the expected finish time as it could be a while before you see the `Manual Test Summary`:
        Expected Finished Time:
        sdb: 2022-11-07 19:32:45
        sdc: 2022-11-07 19:47:45
        sdd: 2022-11-07 19:37:45
        sde: 2022-11-07 20:02:45
        You can monitor the progress and the fact the drive is working by clicking on the task manager icon (top right, looks like a clipboard)
    • Test disk read/write speed
    • Quick question about HDD testing and SMART conveyance test | TrueNAS Community
      • Q: I have a 3 TB SATA HDD that was considered "bad" but I have reasons to believe that it was the controller card of the computer it came from that was bad.
      • If you look at the smartctl -a data on your disk it tells you exactly how many minutes it takes to complete a test. Typical speeds are 6-9 hours for 3-4TB drives.
      • Conveyance is wholly inadequate for your needs.
      • I'd consider your disk good only if all smart data on the disk is good, badblocks for a few passes finds no problems, and a long test finishes without errors.
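      • A hedged sketch of checking that SMART data and starting a long test from the shell (the device name is a placeholder):
        sudo smartctl -a /dev/sda
        sudo smartctl -t long /dev/sda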
    • How to View SMART Results in TrueNAS in 2023 - WunderTech - This tutorial looks at how to view SMART results in TrueNAS. There are also instructions how to set up SMART Tests and Email alerts!
  • Troubleshooting
  • Identify Drives
    • Power down the TrueNAS and physically read the serials on the drives before powering back up again.
    • Drive identification in TrueNAS is done by drive serial numbers (you can also list them from the shell; see the sketch at the end of this list).
    • Linux drive and partition names
      • The Linux drive mount names (eg sda, sdb, sdX) are not bound to a particular SATA port or drive and so can change. These values are based purely on the load order of the drives and therefore cannot be used for drive identification.
      • C.4. Device Names in Linux - Linux disks and partition names may be different from other operating systems. You need to know the names that Linux uses when you create and mount partitions. Here's the basic naming scheme:
      • Names for ATA and SATA disks in Linux - Unix & Linux Stack Exchange - Assume that we have two disks, one master SATA and one master ATA. How will they show up in /dev?
    • How to match ata4.00 to the apropriate /dev/sdX or actual physical disk? - Ask Ubuntu
      • Some of the code mentioned
        dmesg | grep ata
        egrep "^[0-9]{1,}" /sys/class/scsi_host/host*/unique_id
        ls -l /sys/block/sd*
    • linux - Mapping ata device number to logical device name - Super User
      • I'm getting kernel messages about 'ata3'. How do I figure out what device (/dev/sd_) that corresponds to?
        ls -l /sys/block/sd*
    • SOLVED - how to find physical hard disk | TrueNAS Community
      • Q: If it is reported that sda S4D0GVF2 is broken, how to know which physical hard disk it corresponds to.
      • A:
        • Serial number is marked on physical disk. I usually have a table with all serial numbers for each disk position, so is easy find the broken disk.
        • If you have drive activity LED's, you can generate artificial activity. Press CTRL + C to stop it when you're done.
          dd if=/dev/sda of=/dev/null bs=1M count=5000       
        • Use the 'Description' field in the GUI to record the location of the disk.
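    • Listing drive serials from the shell - a minimal sketch that avoids powering down (device names are placeholders):
      # list every whole disk with its device name, serial number, model and size
      lsblk -d -o NAME,SERIAL,MODEL,SIZE
      # or query a single drive directly (shows the Serial Number, Model and Firmware)
      sudo smartctl -i /dev/sda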
  • Misc

Moving Server

This is a lot easier than you think.

Apps

Apps will become an essential part of TrueNAS as it becomes more of a platform than just a NAS.

  • General
    • When you set up an app, it can either keep all of its data inside the Docker container or use mount points in your ZFS system.
    • Use LZ4 on all datasets except those holding data that is already highly compressed, such as movies (see the compression sketch at the end of this General list). (jon says: I have not decided about ZVols and compression yet)
    • Apps | Documentation Hub
      • Expanding TrueNAS SCALE functionality with additional applications.
      • The first time you open the Applications screen, the UI asks you to choose a storage pool for applications.
      • TrueNAS creates an `ix-applications` dataset on the chosen pool and uses it to store all container-related data. The dataset is for internal use only. Set up a new dataset before installing your applications if you want to store your application data in a location separate from other storage on your system. For example, create the datasets for the Nextcloud application, and, if installing Plex, create the dataset(s) for Plex data storage needs.
      • Special consideration should be given when TrueNAS is installed in a VM, as VMs are not configured to use HTTPS. Enabling HTTPS redirect can interfere with the accessibility of some apps. To determine if HTTPS redirect is active, go to System Settings --> GUI --> Settings and locate the Web Interface HTTP -> HTTPS Redirect checkbox. To disable HTTPS redirects, clear this option and click Save, then clear the browser cache before attempting to connect to the app again.
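    • Setting LZ4 compression from the shell - a minimal sketch; `tank/dataset1` is a placeholder and the GUI (Datasets --> Edit) does the same thing:
      # enable LZ4 compression on a dataset (only affects newly written data; inherited by new children)
      sudo zfs set compression=lz4 tank/dataset1
      # confirm the setting and see where it is inherited from
      sudo zfs get compression tank/dataset1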
  • ix-applications
    • ix-applications is the dataset in which TrueNAS stores all of the Docker images.
    • It cannot be renamed.
    • You can set the pool the apps use for the internal storage
      • Apps --> Settings --> Choose Pool
    • Move apps (ix-applications) from one pool to another
      • Apps --> Settings --> Choose Pool --> Migrate applications to the new pool
      • Moving ix-applications with installed apps | TrueNAS Community - I have some running apps, like Nextcloud, traefik, ghost and couple more and I would like to move ix-applications from one pool to another. Is it possible without breaking something in the process?
  • General Tutorials
  • Individual Apps
  • Individual Apps (on Virtual Machines) .... might move this
  • Upgrading
  • TrueCharts (an additional Apps Catalogue)
    • General
      • This is not the same catalog of apps that are already available in your TrueNAS SCALE.
      • TrueCharts - Your source For TrueNAS SCALE Apps
      • Meet TrueCharts – the First App Catalog for TrueNAS SCALE - TrueNAS - Welcome to the Open Storage Era
        • The First Catalog Store for TrueNAS SCALE that makes App management easy.
        • Users and third parties can now build catalogs of application charts for deployment with the ease of an app store experience.
        • These catalogs are like app stores for TrueNAS SCALE.
        • iXsystems has been collaborating and sponsoring the team developing TrueCharts, the first and most comprehensive of these app stores.
        • Best of all, the TrueCharts Apps are free and Open Source.
        • TrueCharts was built by the founders of a group for installation scripts for TrueNAS CORE, called “Jailman”. TrueCharts aims to be more than what Jailman was capable of: a user-friendly installer, offering all the flexibility the average user needs and deserves!
        • Easy setup instructions in the video
    • Setting Up
      • Getting Started with TrueCharts | TrueCharts
        • Below you'll find recommended steps to go from a blank or fresh TrueNAS SCALE installation to using TrueCharts with the best possible experience and performance as determined by the TrueCharts team. It does not replace the application specific guides and/or specific guides on certain subjects (PVCs, VPN, linking apps, etc) either, so please continue to check the app specific documentation and the TrueNAS SCALE specific guides we've provided on this website. If more info is needed about TrueNAS SCALE please check out our introduction to SCALE page.
        • Once you've added the TrueCharts catalog, we also recommend installing Heavyscript and configuring it to run nightly with a cron job. It's a bash script for managing TrueNAS SCALE applications: it can automatically update applications, back up application datasets, open a shell for containers, and more.
      • Adding TrueCharts Catalog on TrueNAS SCALE | TrueCharts
        • Catalog Details
          • Name: TrueCharts
          • Repository: https://github.com/truecharts/catalog
          • Preferred Trains: enterprise, stable, operators
            • Others are available: incubator, dependency
            • Manually type in each additional train you want to add.
            • I just stick to stable.
          • Branch: main
    • Errors
      • If you are stuck at 40% (usually Validating Catalog), just leave it a while as the process can take a long time.
      • [EFAULT] Kubernetes service is not running.

Virtualisation

KVM

  • VM settings are stored in the TrueNAS config and not the ZVol.
  • All your Virtual Machine sector sizes should be on 4096 unless you need 512.
  • Sites
  • General
    • KVM pre-assigns RAM; it is not dynamic, possibly to protect ZFS. Newer versions of TrueNAS allow you to set minimum and maximum RAM values, but I am not sure if this is truly dynamic.
      • I have noticed 2 fields during the VM setup but I am not sure how they apply.
        • Memory Size (Examples: 500 KiB, 500M, 2 TB) - Allocate RAM for the VM. Minimum value is 256 MiB. This field accepts human-readable input (Ex. 50 GiB, 500M, 2 TB). If units are not specified, the value defaults to bytes.
        • Minimum Memory Size - When not specified, guest system is given fixed amount of memory specified above. When minimum memory is specified, guest system is given memory within range between minimum and fixed as needed.
    • Adding and Managing VMs | Documentation hub - Provides instructions adding or managing a virtual machine (VM) and installing an operating system in the VM.
    • TrueNAS Scale Virtualization Features and How To Get Started Building VM's - YouTube | Lawrence Systems
      • Tom goes through setting up a Virtual Machine in TrueNAS; it is easy to follow and understand.
      • The KVM network is, by its nature, blocked from seeing the host. This is good for security but cannot be turned off.
    • Which hypervisor does TrueNAS SCALE use? | TrueNAS Community
      • = KVM
      • There is also an in-depth discussion on how KVM uses ZVols.
    • TPM Support
    • Can TrueNAS Scale Replace your Hypervisor? - YouTube | Craft Computing
      • The amount of RAM you specify for the VM is fixed; there is no dynamic management of this even though KVM supports it.
      • VirtIO drivers are better (and preferred) as they allow direct access to hardware rather than going through an emulation layer.
      • Virtual HDD Drivers for UEFI
        • AHCI
          • Is nearly universally compatible out of the box with every operating system as it is also just emulating physical hardware.
          • SATA limitations and speeds apply here, so you will be limited to 6Gb/s connectivity on your virtual disks.
        • VirtIO
          • Allows the VM client to access block storage directly from the host without the need for system calls to the hypervisor. In other words, a client VM can access the block storage as if it were directly attached.
          • VirtIO drivers are rolled into most Linux distros, making installation pretty straightforward.
          • For Windows clients you will need to load a compatible VirtIO driver before you are able to install the OS.
      • Virtual NIC Drivers
        • Intel e82585 (e1000)
          • Intel drivers are universally supported, but you are limited to the emulated hardware speed of 1Gb/s.
        • VirtIO
          • Allows direct access to the network adapter used by your host, meaning you are only limited by the speed of your physical link, and you can access the link without making system calls to the hypervisor layer, which means lower latency and higher throughput.
          • VirtIO drivers are rolled into most Linux distros, making installation pretty straightforward.
          • For Windows clients you will need to load a compatible VirtIO driver before you are able to install the OS.
      • Additional VM configurations can be done later after the wizard.
    • Windows VirtIO Drivers - Proxmox VE - Download link and further explanations of the drivers here.
    • Accessing NAS From a VM | Documentation Hub - Provides instructions on how to create a bridge interface for the VM and provides Linux and Windows examples so you can access the NAS from your VM.
  • Setup
    • Configuring Virtualization and Apps in TrueNAS SCALE | Documentation Hub
      • Provides general information on setting up virtual machines and applications on TrueNAS SCALE.
      • Configuring TrueNAS SCALE to work with virtualized features, such as virtual machines (VMs) and applications, is a part of the setup process that when optimized takes advantage of the network storage capabilities that SCALE offers.
  • Resize VM Disks
    • Resize Ubuntu VM Disk on TrueNAS Scale · GitHub
      1. Shutdown the target VM
      2. Locate the zvol where the storage is allocated in the Storage blade in the TrueNAS Scale Web UI
      3. Resize the zvol by editing it - this can ONLY be increased, not shrunk! (A shell equivalent is sketched after these steps.)
      4. Save your changes
      5. Start your target VM up again
      6. Log in to the VM
      7. Execute the growpart command, e.g. sudo growpart /dev/vda 2
      8. Execute the resize2fs command, e.g. sudo resize2fs /dev/vda2
      9. Verify that the disk has increased in size using df -h
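    • Shell equivalent - a minimal sketch assuming a ZVol at `tank/vm/ubuntu` and an ext4 root on partition 2 of /dev/vda (both placeholders); the GUI edit in step 3 does the same as the `zfs set volsize` line:
      # grow the zvol to 60 GiB (volsize can only be increased, never shrunk)
      sudo zfs set volsize=60G tank/vm/ubuntu
      # then, inside the guest, grow the partition and the filesystem on it
      sudo growpart /dev/vda 2
      sudo resize2fs /dev/vda2
      df -h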
  • RAW Disk Images
    • You can use RAW images for virtual disks
    • Are these as good as ZVols?
    • What are the Pros/Cons?
  • Use VirtualBox (VDI), Microsoft (VHD) or VMWare virtual disks (VMDK)
    • You cannot use these disk formats directly; TrueNAS KVM only uses ZVols (ZFS Volumes).
    • You can convert the disk images to ZVols, but it is awkward (see the sketch under 'Convert a Virtual Disk to a ZVOL / Importing' below).
  • Convert a Virtual Disk to a ZVOL / Importing
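    • A minimal sketch, assuming `qemu-img` is available in the shell and you have already created a ZVol at `tank/vm/imported` that is at least as large as the source image (all names are placeholders):
      # check the virtual size of the source image so the zvol can be sized to match
      qemu-img info ubuntu.vmdk
      # write the image contents in raw form straight onto the zvol's block device
      sudo qemu-img convert -p -O raw ubuntu.vmdk /dev/zvol/tank/vm/imported
    • Afterwards, attach the ZVol to a VM as a disk device in the usual way.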
  • Virtio Drivers
  • CD-ROM
    • Error while creating the CDROM device | TrueNAS Community
      • Q: When i try to make a VM i get this message every time
        Error while creating the CDROM device. [EINVAL] attributes.path: 'libvirt-qemu' user cannot read from '/mnt/MAIN POOL/Storage/TEST/lubuntu-18.04-alternate-amd64.iso' path. Please ensure correct permissions are specified.
      • A: I created a group for my SMB user and added libvirt-qemu to the group now it works :}
    • Cannot eject CD-ROM
      1. Power down the VM and delete the CD-ROM, there is no eject option.
      2. Try Changing the order so that Disk is before CDROM.
      3. Use a Dummy.ISO (an empty ISO).
    • Use a real CD-ROM drive
    • Stop booting from a CDROM
      • Delete the device from the VM.
      • Attach a Dummy/Blank iso.
      • Changing the boot number to be last doesn't work.
  • CPU Pinning / NUMA (Non-Uniform Memory Access)
  • Permissions
  • noVNC - Does not have copy and paste
    • Use SSH/PuTTY
    • Use SPICE that way you have clipboard sharing between host & guest
    • Run 3rd Party Remote Desktop software in the VM.
  • Networking
    • I want TrueNAS to communicate with a virtualised firewall even when there is no cable connected to the TrueNAS’s physical NIC | TrueNAS Community
      • No:
        • This is by design for security and there is no way to change this behaviour.
        • Tom @ Lawrence Systems has asked for this as an option (or at least mentioned it).
      • This is still true for TrueNAS SCALE
    • Can not visit host ip address inside virtual machine | TrueNAS Community
      • You need to create a bridge. Add your primary NIC to that BRIDGE and assign your VM to the BRIDGE instead of the NIC itself.
      • To set up the bridge for your main interface correctly from the WebGUI, you need to follow a specific order of steps so as not to lose connectivity:
        1. Set up your main interface with static IP by disabling DHCP and adding IP alias (use the same IP you are connected to for easy results)
        2. Test Changes and then Save them (important)
        3. Edit your main interface, remove the alias IP
        4. Don't click Test Changes
        5. Add a bridge, name it something like br0, select your main interface as a member and add the IP alias that you had on main interface
        6. Click Apply and then Test Changes
        7. It will take longer to apply than just setting a static IP; you may even get a screen telling you that your NAS is offline, but just wait - worst case, TrueNAS will revert to the old network settings.
        8. After 30sec you should see an option to save changes.
        9. After you save them you should see both your main interface and new bridge active but bridge should have the IP
        10. Now you just assign the bridge as an interface for your VM.
    • SOLVED - No external network for VMs with bridged interface | TrueNAS Community
      • I hope somebody here has pointers for a solution. I'm not familiar with KVM so perhaps am missing an obvious step.
      • Environment: TrueNAS SCALE 22.02.1 for testing on ESXi with 2x VMware E1000e NICs on separate subnets plus bridged network. Confirmed that shares, permissions, general networking, etc. work.
      • Following the steps in the forum, this Jira ticket, and on YouTube I'm able to setup a bridged interface for VM's by assigning the IP to the bridged interface instead of the NIC. Internally this seems to work as intended, but no matter what I try, I cannot get external network connections to work from and to the bridged network.
      • When I remove the bridged interface and assign the IP back to the NIC itself, external connections are available again, I can ping in and out, and the GUI and shares can be contacted.
  • System Clock / System Time / Guest Timing Management
    • Leaving the "System Clock" on "Local" is best and works fine with Webmin/Virtualmin.
    • When you start a KVM, the time (UTC/Local) from your Host is used as the start time for the emulated RTC of the Guest.
    • You can update the Guest RTC as required and it will not affect the Host. (A quick way to check the guest clock source and NTP status is sketched at the end of this subsection.)
    • Chapter 8. KVM Guest Timing Management Red Hat Enterprise Linux 7 | Red Hat Customer Portal
      • Virtualization involves several challenges for time keeping in guest virtual machines.
      • Guest virtual machines without accurate time keeping may experience issues with network applications and processes, as session validity, migration, and other network activities rely on timestamps to remain correct.
      • KVM avoids these issues by providing guest virtual machines with a paravirtualized clock (kvm-clock).
      • The mechanics of guest virtual machine time synchronization. By default, the guest synchronizes its time with the hypervisor as follows: 
        • When the guest system boots, the guest reads the time from the emulated Real Time Clock (RTC).
        • When the NTP protocol is initiated, it automatically synchronizes the guest clock. Afterwards, during normal guest operation, NTP performs clock adjustments in the guest.
    • I'm experiencing timer drift issues in my VM guests, what to do? | FAQ - KVM
      • Maemo docs state that it's important to disable UTC and set the correct time zone, however I don't really see how that would help in case of diverging host/guest clocks.
      • IMHO much more useful and important is to configure properly working NTP server (chrony recommended, or ntpd) on both host and guest.
    • linux - Clock synchronisation on kvm guests - Server Fault
      • Fundamentally the clock is going to drift some, I think there is a limit to what can be done at this time.
      • You say that you don't run NTP in the guests, but I think that is what you should do.
      • The best option for a precise clock on the guest is to use the kvm-clock source (pvclock) which is synchronized with clock's host.
      • Here is a link to the VMware paper Timekeeping in VMware Virtual Machines (pdf - 2008)
    • KVM Clocks and Time Zone Settings - SophieDogg
      • So the other day there was an extended power outage down at the dogg pound, and one of my non-essential server racks had to be taken off-line. This particular server rack only has UPS battery backup, but no generator power (like the others), and upon reboot, the clocks in all my QEMU Linux VM’s were wrong! They kept getting set to UTC time instead of local time… After much searching and testing, I finally found out what was necessary to fix this issue.
      • Detailed command line solution for this problem.
    • VM - Windows Time Wrong | TrueNAS Community
      • Unix systems run their clock in UTC, always. And convert to and from local time for output/input of dates. It's a multi user system - so multiple users can each have their own timezone settings.
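    • Checking the guest clock from inside a Linux VM - a minimal sketch (run in the guest, not on TrueNAS):
      # confirm the guest is using the paravirtualised kvm-clock source
      cat /sys/devices/system/clocksource/clocksource0/current_clocksource
      # show local time, time zone and whether NTP synchronisation is active
      timedatectl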

Cloned VMs are not clones, they are snapshots!

  • Do NOT use the 'Clone' button and expect an independent clone of your VM.
  • This functionality is similar to snapshots and how they work in VirtualBox, except here TrueNAS bolts a separate KVM instance onto the newly created snapshot and presents it as a new VM.
  • This should only be used for testing new features and things out on existing VMs.
  • TrueNAS should rename the button 'Clone' --> 'Snapshot VM' as this is a better description.

I had to look into this because I assumed the 'Clone' button made a full clone of the VM, it does not.

I will outline what happens and what you get when you 'Clone' a VM; an illustrative ZFS sketch follows the steps below.

  1. Click the 'Clone' button.
  2. TN creates a snapshot of the VM's ZVol.
  3. TN clones this snapshot to a new ZVol.
  4. TN creates a new VM using the meta settings from the 'parent' VM and the newly created ZVol.
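
In ZFS terms this is roughly equivalent to the following (a minimal sketch; the pool/ZVol names and the snapshot name are placeholders, TrueNAS generates its own):

  # snapshot the parent VM's zvol
  sudo zfs snapshot tank/vm/parentvm@clone-mytest
  # create a copy-on-write clone from that snapshot - this is what backs the 'cloned' VM
  sudo zfs clone tank/vm/parentvm@clone-mytest tank/vm/parentvm-mytest

The clone remains dependent on the parent's snapshot, which is why the parent cannot be deleted while clones exist.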

FAQ

  • You cannot delete a Parent VM if it has Child/Cloned VMs. You need to delete the children first.
  • You cannot delete a Parent ZVol if it has Child/Cloned ZVols. You need to delete the children first.
  • Deleting a Child/Cloned VM (with the option 'Delete Virtual Machine Data') only deletes the ZVol, not the snapshot that it was created from on the parent.
  • When you delete the Parent VM (with the option 'Delete Virtual Machine Data'), all the snapshots are deleted as you would expect.
  • Are the child VMs' meta settings linked to the parent, or is it just the ZVols?
    • I am assuming the ZVols are linked and the meta information is not.
  • How can I tell if the ZVol is a child of another?
    1. Select the ZVol in the 'Datasets' section. It will show a 'Promote' button next to the delete button.
    2. The naming convention of the ZVol will help. The clone name you selected is appended to the end of the parent's name to give the full name of the new ZVol, so all children of a parent start with the parent's name.
  • Don't manually rename cloned ZVols; the naming convention helps you visually identify which parent each clone belongs to.
  • The only true way to get a clone of a VM is to use zfs send | zfs recv to create a new (full) copy of the ZVol, and then manually create a new VM and assign it the newly created ZVol (see the sketch below).
  • 'Promote' will not fix anything here.
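  • Checking parentage and making a true copy from the shell - a minimal sketch; the pool/ZVol names are placeholders:
    # 'origin' shows the snapshot a cloned zvol was created from ('-' means it is not a clone)
    sudo zfs get origin tank/vm/parentvm-mytest
    # a genuinely independent copy: snapshot the parent, then send|recv it to a new zvol
    sudo zfs snapshot tank/vm/parentvm@fullcopy
    sudo zfs send tank/vm/parentvm@fullcopy | sudo zfs recv tank/vm/newvm
  • Then create a new VM manually and attach the new ZVol as its disk.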

Links

Setting up a Virtual Machine (Worked Example / Virtualmin)

This is a worked example of how to set up a virtual machine using the wizard, with some of the settings explained where needed.

  • The wizard is very limited on the configuration of the ZVol and does not allow you to set the:
    • ZVol name
    • Logical/Physical block size
    • Compression type
  • ZVols created by the Wizard
    • have a random suffix appended to the end of the name you choose.
    • will be `Thick` Provisioned.
  • I would recommend creating the ZVol manually with your required settings but you can use the instructions below to get started.
  1. Operating System
    • Guest Operating System: Linux
    • Name: Virtualmin
    • Description: My Webserver
    • System Clock: Local
    • Boot Method: UEFI
    • Shutdown Timeout: 90
      • When you shut down TrueNAS, it sends a shutdown broadcast to all `Guest VMs`.
      • This setting is the maximum time TrueNAS will wait for this 'Guest VM' to send a `Shutdown Success` message, after which TrueNAS will assume the VM is powered off and will continue to shut itself (TrueNAS) down.
      • Longer might be required for more complicated VMs.
      • This allows TrueNAS to gracefully shut down all of its `Guest VMs`.
    • Start on Boot: Yes
    • Enable Display: Yes
      • This allows you to remotely see your display.
      • TrueNAS uses NoVNC (through the GUI) to see the VM's screen.
    • Display type: VNC
    • Bind: 0.0.0.0
      • Unless you have multiple adapters this will probably always be 0.0.0.0, but you can specify a particular IP. (Maybe look into this further.)
  2. CPUs and Memory
    • Virtual CPUs: 1
    • Cores: 2
    • Threads: 2
    • Optional: CPU Set (Examples: 0-3,8-11):
    • Pin vcpus: unticked
    • CPU Mode: Host Model
    • CPU Model: Empty
    • Memory Size (Examples: 500 KiB, 500M, 2 TB): 8GiB
    • Minimum Memory Size: Empty
    • Optional: NUMA nodeset (Example: 0-1): Empty
  3. Disks
    • Create new disk image: Yes
    • Select Disk Type: VirtIO
      • VirtIO requires extra drivers for Windows but is quicker.
    • Zvol Location: /Fast/Virtual_Disks
    • Size (Examples: 500 KiB, 500M, 2 TB): 50GiB
  4. Network Interface
    • Adapter Type: VirtIO
      • VirtIO requires extra drivers for Windows but is quicker.
    • Mac Address: As specified
    • Attach NIC: enp1s0
      • Might be different for yours such as eno1
    • Trust Guest filters: No
      • Trust Guest Filters | Documentation Hub
        • Default setting is not enabled. Set this attribute to allow the virtual server to change its MAC address. As a consequence, the virtual server can join multicast groups. The ability to join multicast groups is a prerequisite for the IPv6 Neighbor Discovery Protocol (NDP).
        • Setting Trust Guest Filters to “yes” has security risks, because it allows the virtual server to change its MAC address and so receive all frames delivered to this address.
  5. Installation Media
    • As required
  6. GPU
    • Hide from MSR: No
    • Ensure Display Device: Yes
    • GPU's:
  7. Confirm Options / VM Summary
    • Guest Operating System: Linux
    • Number of CPUs: 1
    • Number of Cores: 2
    • Number of Threads: 2
    • Memory: 3 GiB
    • Name: Virtualmin
    • CPU Mode: CUSTOM
    • Minimum Memory: 0
    • Installation Media: /mnt/MyPoolA/ISO/ubuntu-22.04.2-live-server-amd64.iso
    • CPU Model: null
    • Disk Size: 50 GiB
  8. Rename the ZVol (optional)
    • The ZVol created during the wizard will always have a random suffix added
      MyPoolA/Virtual_Disks/Virtualmin-ky3v69
    • You need to follow the instructions elsewhere in this tutorial to change the name but for the TLDR people:
      1. sudo zfs rename MyPoolA/Virtual_Disks/Virtualmin-ky3v69 MyPoolA/Virtual_Disks/Virtualmin
      2. Virtualization --> Virtualmin --> Devices --> Disk --> Edit --> ZVol: MyPoolA/Virtual_Disks/Virtualmin
  9. Change the VM disk sector size to 4Kn/4096B (optional)
    • The default sector size for VM disks created by the wizard is 512B, but for modern operating systems it is better to use 4Kn (4096-byte sectors). TrueNAS creates its pools with 4KiB sectors (ashift=12) by default.
    • Virtualization --> Virtualmin --> Devices --> Disk --> Edit --> Disk Sector Size: 4096
  10. Correct the ZVol Metadata Sector Size (DO NOT do this, reference only)

    The following are true:

    • You have one setting for both the Logical and Physical block size.
    • volblocksize (ZVol)
      • The ZVol, in its meta information, has a value for the block size called volblocksize.
      • If a VM or an iSCSI is used, then this setting is ignored because they supply their own volblocksize parameter.
      • This value is only used if no block size is specified.
      • This value is written in to the metadata when the ZVol is created.
      • The default value is 16KB
      • 'volblocksize' is readonly
    • The block size configured in the VM is 512B.
    • Check the volblocksize (a sketch for checking the sector size the guest actually sees follows this list):
      sudo zfs get volblocksize MyPoolA/Virtual_Disks/Virtualmin

    This means:

    • volblocksize
      • A ZVol created during the VM wizard still has volblocksize=16KB, but this is not the value used by the VM for its block size.
      • I believe this setting is used by the ZFS filesystem and alters how it handles the data rather than how the block device is presented.
      • You cannot change this value after the ZVol is created.
      • It does not affect the blocksize that your VM or iSCSI will use.
    • When I manually create a ZVol
      • and I set the block size to 4KB, I get a warning: `Recommended block size based on pool topology: 16K. A smaller block size can reduce sequential I/O performance and space efficiency.`
      • The tooltip says: `The zvol default block size is automatically chosen based on the number of the disks in the pool for a general use case.`
    • When I edit the VM disk
      • Help: Disk Sector Size (tooltip): Select a sector size in bytes. Default leaves the sector size unset and uses the ZFS volume values. Setting a sector size changes both the logical and physical sector size.
      • I have the options of (Default|512|4096)
      • Default will be 512B as the VM is setting the blocksize and not the ZVol volblocksize.
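    • Verifying what the guest actually sees - a minimal sketch run inside a Linux guest; /dev/vda is a placeholder device name:
      # logical and physical sector sizes presented to the guest by the virtual disk
      lsblk -o NAME,LOG-SEC,PHY-SEC /dev/vda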
  11. Change ZVol Compression (optional)
    • Compression can be inherited from the dataset hierarchy or set specifically on the ZVol. I will show you how to change this option.
    • Datasets --> Mag --> Virtualmin (ZVol) --> ZVol Details --> Edit --> Compression level
  12. Add/Remove devices (optional)
    • The wizard is limited in what devices you can add but you can fix that now by manually adding or removing devices attached to your VM.
    • Virtualization --> Virtualmin --> Devices --> Add
  13. Install Ubuntu as per this article (ready for Virtualmin)

 

Docker

All apps on TrueNAS are pre-made Docker images, but you can roll your own if you want (a rough illustration of what this maps to on the command line is sketched at the end of this section).

  • Using Launch Docker Image | Documentation Hub
    • Provides information on using Launch Docker Image to configure custom or third-party applications in TrueNAS SCALE.
    • What is Docker? Docker is an open-source platform for developing, shipping, and running applications. Docker enables the separation of applications from infrastructure through OS-level virtualization to deliver software in containers.
    • What is Kubernetes? Kubernetes (K8s) is an open-source system for automating deployment, scaling, and managing containerized applications.
  • How to Use Docker on TrueNAS Scale (2023) - WunderTech - This step-by-step guide will show you how to use Docker on TrueNAS Scale. Docker on TrueNAS Scale will totally revolutionize your NAS!
    • While the applications shown above will allow you to easily create a Docker container using a preset configuration, you can technically create any Docker container you’d like. Since TrueNAS Scale is built on Debian-Linux unlike TrueNAS Core, Docker is supported out of the box.
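  • As a rough illustration of what the Launch Docker Image form maps to, here is a hypothetical `docker run` equivalent using the public nginx image; whether a docker/container CLI is available in the SCALE shell depends on your release, so treat this as a sketch rather than the supported workflow:
    # run a container, publish port 8080 on the NAS to port 80 in the container,
    # and bind-mount a host dataset for its data
    docker run -d --name web-test \
      -p 8080:80 \
      -v /mnt/tank/appdata/web:/usr/share/nginx/html \
      nginx:latest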

Questions (to sort)

  • Backup these
    • SSH keys
    • credentials
    • system dataset, is this the same as backing up the config 'download file' thing
    • VM drives + VM settings (meta data)?
    • the whole NAS
    • quickest way to backup everything?
  • Backups Qs
    • where is the 3.45am config backup option?
    • When i create my first non `boot-pool` pool, does TrueNAS automatically move the system dataset? = yes
    • Do i need to manually backup the system dataset? how do i restore it?
  • mounts
    • The pool/vdev is mounted, is this the block device?
      • /mnt/Magnetic_Storage
      • ZVols are block-level devices exposed under /dev/zvol/<pool>/..., while datasets are filesystems mounted under the pool's mount point.
  • KVM
    • KVM: `Host model` vs `host passthrough` for CPU ?? seem to do the same
    • should i use compression on my virtual machines?
    • should i use compression on a production webserver ZVol?
    • should i use compression on virtual disks (zvol) (that i use for production/live/webserver)
  • Apps
    • are there any other app catalogs for truenas
  • Pulling disks
    • Should I put a drive offline before removing it?
  • vdev
  • UPS
    • will integration shut my dockers and VMs down gracefully
  • ZFS
    • Do i need to defrag ZFS
    • does this need to be done? is it automatic? is it pointless with NVMe and SSD
    • how do i mount and umount ZFS dataset filesystems
  • VM Block size issue
    • it should set the correct ZVol blocksize in the zvol meta during the VM wizard
    • you should be able to choose the block size during the wizard
  • ZVol
    • zvol vs RAW, which is better?
    • zfs equivalent of a fixed disk for ZVol
    • is there a fixed-disk version of a ZVol to allow for better performance? consider dynamic vs static. does a reserved (thick-provisioned) ZVol actually reserve the space?
    • ask about the 16K thing: should I set this to 4K to prevent a penalty? is this the block size for ZFS or for the guest OS? get this checked
  • BIOS
    • what is fast boot? do i need this on?
    • do i need fast boot on my truenas, still enabled, should i disable?
    • what is asus nvme native driver? do i need it?
  • RAM
    • what is nicking 2.5GB of my system RAM, is it my APU and can this be turned off. i reduced the UMA to 64MB
  • Shares
    • what is an app share and how does it differ from generic and SMB? is it SMB with specific settings for apps? aren't apps all supposed to be Docker containers? (some do have configs that can be stored elsewhere)

 

 

 
