<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The Generally Available Blog]]></title><description><![CDATA[A blog for programming, open source development, DevOps, and all things Tech.]]></description><link>https://blog.jenningsga.com/</link><image><url>https://blog.jenningsga.com/favicon.png</url><title>The Generally Available Blog</title><link>https://blog.jenningsga.com/</link></image><generator>Ghost 5.82</generator><lastBuildDate>Thu, 09 Apr 2026 04:26:34 GMT</lastBuildDate><atom:link href="https://blog.jenningsga.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Upper Timing Cover Replacement]]></title><description><![CDATA[Do-it-yourself guide for replacement of the upper timing cover for the VW Mk7 Golf R and GTI.]]></description><link>https://blog.jenningsga.com/upper-timing-cover-replacement/</link><guid isPermaLink="false">688ff98c08ff4e0001f71b3a</guid><category><![CDATA[R]]></category><category><![CDATA[Golf]]></category><category><![CDATA[Mk7]]></category><category><![CDATA[DIY]]></category><category><![CDATA[MQB]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Sun, 10 Aug 2025 00:04:11 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1730751668029-2a2e95690d62?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI4fHxtazd8ZW58MHx8fHwxNzU0NzgzOTQwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1730751668029-2a2e95690d62?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDI4fHxtazd8ZW58MHx8fHwxNzU0NzgzOTQwfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Upper Timing Cover Replacement"><p>The plastic timing cover on my 2016 Volkswagen Golf R was leaking. Well, not exactly leaking but a slow annoying weeping of motor oil around the cover that hides the intricate mechanical timing components of the 2 liter four-cylinder turbocharged engine.</p><p>I tried the usual steps to remedy this common problem area of the Volkswagen Mk7 (and prior) Golf platform: tightening the cover bolts and making sure my positive crankcase ventilation (PCV) system was functioning properly. Alas, oil would still make its way past the seals and throughout my engine bay.</p><p>After purchasing a new upper timing cover and seals and spending about 2 hours time on installation, I was able to resolve the issue. In this post I&apos;ll be going over everything you need and some helpful tricks in order to resolve this common issue.</p><h1 id="parts-required">Parts Required</h1><p>ShopDAP has a Upper Timing Cover Reseal kit that will provide all the hardware needed for this job. 
The total price of this kit is $99 plus tax.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.shopdap.com/upper-timing-reseal-kit-daprepair-06k-103-269-f-rein-dapr.html?ref=blog.jenningsga.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Upper Timing Cover Reseal Kit | Gen 3 2.0T / MK7 &amp; MK7.5</div><div class="kg-bookmark-description">Fix the Oil Leak Coming from the Upper Timing Cover on your 2.0t Engine</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.shopdap.com/pub/media/favicon/stores/1/shopdap-favcon_1.png" alt="Upper Timing Cover Replacement"></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.shopdap.com/pub/media/catalog/product/cache/648aea516ef657c2711165e03a32e86b/0/6/06k-198-269-f-rein-dapr_.jpg" alt="Upper Timing Cover Replacement"></div></a></figure><p>The individual parts are as follows:</p><table>
<thead>
<tr>
<th>Part</th>
<th>SKU</th>
<th>Quantity</th>
</tr>
</thead>
<tbody>
<tr>
<td>M6x14 Torx Screw</td>
<td>N10751201</td>
<td>6</td>
</tr>
<tr>
<td>Seal for Cam Magnet</td>
<td>WHT007212B</td>
<td>2</td>
</tr>
<tr>
<td>Upper Timing Cover</td>
<td>TCV0137</td>
<td>1</td>
</tr>
</tbody>
</table>
<p>A new upper timing cover will come with the seals already affixed, which is convenient. Do make sure the seals are seated properly on the cover before installing it on your vehicle. You may or may not be able to get the seals themselves without the plastic cover. I did not go this route, but it may be viable as long as your cover has not warped.</p><p>The two seals for the camshaft magnets are optional, but they are a good idea to replace while you have the upper timing cover off and have access to them.</p><p>Technically, the Torx screws are one-time use, and it is advised to get new ones.</p><h1 id="tools-used">Tools Used</h1><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2025/08/tools.jpg" class="kg-image" alt="Upper Timing Cover Replacement" loading="lazy" width="1063" height="867" srcset="https://blog.jenningsga.com/content/images/size/w600/2025/08/tools.jpg 600w, https://blog.jenningsga.com/content/images/size/w1000/2025/08/tools.jpg 1000w, https://blog.jenningsga.com/content/images/2025/08/tools.jpg 1063w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Assortment of Tools</span></figcaption></figure><p>The <a href="https://www.harborfreight.com/locking-flex-head-ratchet-and-bit-set-35-piece-58074.html?ref=blog.jenningsga.com">Harbor Freight ICON&#xA0;Locking Flex-Head Ratchet and Bit Set</a> is key to a successful DIY job. The Torx screws on the upper timing cover and the two cam magnets sit in a tight spot, very close to the engine mount. This ratchet, with its pass-through bit design, allows access to these areas without needing to remove the engine mount (which would also require properly supporting the engine).</p><p>Another helpful tool is a thin 10mm open-ended wrench. I am a fan of this <a href="https://www.amazon.com/dp/B078JDRMH2?ref_=ppx_hzsearch_conn_dt_b_fed_asin_title_3&amp;th=1&amp;ref=blog.jenningsga.com">super thin set of wrenches by Capri Tools</a> that comes in 7 different sizes. During my research, some individuals had found success with a long ratcheting box-end wrench. I would advise against this: the two bolts on the bottom come very close to the engine mount as they are backed out, and you may get stuck with a box-end wrench due to the limited clearance there.</p><p>Other tools include: a pick set to remove the cam seals, a set of pliers to remove a coolant hose coming from the expansion tank, and trim tools to pry the cam magnets from the upper timing cover.</p><h1 id="upper-timing-cover-removal-and-install">Upper Timing Cover Removal and Install</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2025/08/1000010670.jpg" class="kg-image" alt="Upper Timing Cover Replacement" loading="lazy" width="1209" height="1209" srcset="https://blog.jenningsga.com/content/images/size/w600/2025/08/1000010670.jpg 600w, https://blog.jenningsga.com/content/images/size/w1000/2025/08/1000010670.jpg 1000w, https://blog.jenningsga.com/content/images/2025/08/1000010670.jpg 1209w" sizes="(min-width: 720px) 720px"></figure><p>Slide the hose off the metal barb shown in the above picture. You will also need to unbolt the metal barb to give clearance during removal of the cover.</p><p>To remove the connectors on the two cam magnets, take a pick and push the clip up.
Then press the clip down and pull the connector out.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2025/08/1000010672.jpg" class="kg-image" alt="Upper Timing Cover Replacement" loading="lazy" width="1209" height="1209" srcset="https://blog.jenningsga.com/content/images/size/w600/2025/08/1000010672.jpg 600w, https://blog.jenningsga.com/content/images/size/w1000/2025/08/1000010672.jpg 1000w, https://blog.jenningsga.com/content/images/2025/08/1000010672.jpg 1209w" sizes="(min-width: 720px) 720px"></figure><p>Next, you will need to unbolt the oil dipstick tube to give you more space while removing the cover. You do not need to remove the dipstick itself. You can see the bolt in the above picture.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2025/08/1000010671.jpg" class="kg-image" alt="Upper Timing Cover Replacement" loading="lazy" width="1209" height="1209" srcset="https://blog.jenningsga.com/content/images/size/w600/2025/08/1000010671.jpg 600w, https://blog.jenningsga.com/content/images/size/w1000/2025/08/1000010671.jpg 1000w, https://blog.jenningsga.com/content/images/2025/08/1000010671.jpg 1209w" sizes="(min-width: 720px) 720px"></figure><p>The dipstick tube is also connected to another plastic clip, outlined in the picture above. With both detached, the dipstick tube can be moved out of the way.</p><p>Now you can remove the six bolts (three each) holding the camshaft magnets. Carefully pry the magnets out. They like to pop out and drop down into the engine bay.</p><p>This will give you the most room for removing all of the bolts for the upper timing cover. The bottom two bolts are the worst to get to. Use the thin open-ended wrench to work the two bolts loose. I could not get my fingers on the bolts to spin them and had to slowly wrench them out. Unfortunately, the Harbor Freight flex-head ratchet will not fit in the bottom space, but it does make easy work of the other four bolts that hold the cover on. The bolts are locked onto the cover itself and do not need to be removed completely to remove the cover from the engine block.</p><p>With the bolts backed out as far as they will go, carefully remove the cover. This is easier said than done and will require a bit of finesse.</p><p>With the cover removed (sorry, no pictures of that), be sure to clean the mounting surfaces where the seals rest with a shop towel. Now is the perfect time to replace the two camshaft magnet seals with a pick. Remember to move your oil cap over to the new cover as well.</p><p>Installation is the reverse. The bolts like to get in the way while you work the cover into place. Do not force anything too hard, and be careful not to knock the seals out of place.</p><p>Hand-tighten and then torque the six cover bolts in a star pattern. The final torque spec for these bolts is 9 Nm. Clean and reinstall the two cam magnets and replace the six aluminum Torx bolts. These torque-to-yield bolts must be torqued to 4 Nm and then turned an additional 45 degrees.</p><p>Replace the bolt for the oil dipstick tube, replace the coolant hose, and you are all done!</p>]]></content:encoded></item><item><title><![CDATA[The Internet Protocol v6 - The Digital Messiah]]></title><description><![CDATA[A deep dive into Internet Protocol v6 (IPv6).
IPv6 will expand the number of addresses available on the Internet and provide improvements to its architecture, performance, and reliability, enabling further innovation for our connected devices.]]></description><link>https://blog.jenningsga.com/internet-protocol-v6-digital-messiah/</link><guid isPermaLink="false">664756537d9c5200018a4e76</guid><category><![CDATA[Networking]]></category><category><![CDATA[IPv6]]></category><category><![CDATA[Internet]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Sat, 22 Jun 2024 21:36:40 GMT</pubDate><media:content url="https://blog.jenningsga.com/content/images/2024/05/ipv6-digital-messiah.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.jenningsga.com/content/images/2024/05/ipv6-digital-messiah.png" alt="The Internet Protocol v6 - The Digital Messiah"><p>The Internet has a problem: with an ever-increasing number of devices persistently connected, we are running out of addresses to provide through the original and widely used Internet Protocol v4 (IPv4).</p><p>This has been anticipated since the late 1980s, and all of the top-level address spaces have been <a href="https://www.nro.net/ipv4-free-pool-depleted?ref=blog.jenningsga.com">allocated as of 2011</a>. Do not fret, as there is an increasingly popular standard on the horizon, the Internet Protocol v6 (IPv6), which will vastly expand the number of addresses available. In addition, IPv6 provides some fundamental improvements to the outdated IPv4 architecture, helping to improve the performance and reliability of the routing of data to our precious connected devices.</p><p>The king is dead, long live the new digital messiah!</p><h1 id="addressing-architecture">Addressing Architecture</h1><p>One of the limiting factors of IPv4 is the number of unique addresses that may be allocated to connected devices. IPv4 has a 32-bit address space, allowing around 4.3 billion addresses to be allocated globally.</p><p>There are <a href="https://en.wikipedia.org/wiki/IPv4?ref=blog.jenningsga.com#Special-use_addresses">restricted blocks of addresses</a>, used for special purposes such as private networks, further reducing the number of addresses that may be used for devices connected to the Internet.</p><p>In contrast, IPv6 uses a 128-bit address space, allowing for a staggering 340 undecillion (trillion trillion trillion) unique addresses. It is hard to conceptualize the vastness of this span of numbers, but this quote from an <a href="https://money.cnn.com/2012/06/06/technology/ipv6/index.htm?ref=blog.jenningsga.com">article by CNN</a> may help put it into perspective:</p><blockquote>
<p>With IPv6, there are now enough IP combinations for everyone in the world to have a billion billion IP addresses for every second of their life.</p>
</blockquote>
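<p>We can sanity-check that claim with a few lines of Python. The population and lifespan figures below are rough assumptions, used only to show the orders of magnitude involved:</p><pre><code class="language-python"># Rough sanity check of the quoted claim; the inputs are ballpark assumptions.
total_addresses = 2 ** 128                  # the full IPv6 address space

world_population = 8 * 10 ** 9              # roughly 8 billion people
seconds_per_life = 80 * 365 * 24 * 60 * 60  # an assumed 80-year lifespan
billion_billion = 10 ** 18

needed = world_population * seconds_per_life * billion_billion
print(f'{total_addresses:.3e} available vs {needed:.3e} needed')
# 3.403e+38 available vs 2.018e+37 needed -- the claim holds, with room to spare</code></pre>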
<h1 id="packet-headers">Packet Headers</h1><p>With a quadruple increase in address size, one might think that <a href="https://datatracker.ietf.org/doc/html/rfc2460?ref=blog.jenningsga.com">IPv6 packet headers</a>, which specify the addresses the data is to be transmitted to and from, may be at least four times as large. In fact, the header size is only twice as large as seen with IPv4 packets.</p><p>This is due to changes to the structure of IPv6 packets by removing no longer needed components found in IPv4. These changes have made for more performant packet forwarding by creating as lean of a packet size as possible while supporting the necessary growth of the Internet. Meta <a href="https://engineering.fb.com/2015/09/14/networking-traffic/ipv6-it-s-time-to-get-on-board/?ref=blog.jenningsga.com">reported seeing 10-15% faster page loads</a> due to their transition to IPv6-only deployments in 2015.</p><h3 id="checksum-field-has-been-dropped">Checksum Field Has Been Dropped</h3><p>The IPv4 checksum field, which was used to verify that the datagram contained within a packet had not been corrupted during transit, has been dropped.</p><p>The reasoning behind this change is that devices already perform this action by validating the integrity of the transmitted data at the data link layer (Layer 2). Layer 4 protocols, such as TCP and UDP, are also encouraged to perform redundancy checks. This is done as a part of the default TCP operation. For UDP, checksum validation is optional in IPv4, yet is required to be used under IPv6.</p><h3 id="packet-fragmentation-has-been-dropped">Packet Fragmentation Has Been Dropped</h3><p>IPv4 handles cases where the Maximum Transmission Unit (MTU), defining the maximum size of a packet, is smaller than the transmitted packet size, by fragmenting a packet along its destination into smaller units. IPv6 does not support fragmentation past the originating source. Routers will discard packets larger than the MTU defined on the interface. Instead, the source of the packet is responsible for fragmenting it using the appropriate size.</p><h1 id="address-types">Address Types</h1><pre><code>Address type         Binary prefix        IPv6 notation
------------         -------------        -------------
Unspecified          00...0  (128 bits)   ::/128
Loopback             00...1  (128 bits)   ::1/128
Multicast            11111111             FF00::/8
Link-Local Unicast   1111111010           FE80::/10
Global Unicast       (everything else)</code></pre><p>An IPv6 address is represented as eight groups of four hexadecimal digits, separated by colons. This can be shortened by using a double colon, <code>::</code>, to represent consecutive zeros.</p><p>There are several important types of addresses defined by the <a href="https://datatracker.ietf.org/doc/html/rfc4291?ref=blog.jenningsga.com">IPv6 addressing architecture standards</a> that we will learn about below.</p><h3 id="unicast-addresses">Unicast Addresses</h3><p>A unicast address is an identifier for a single interface. A packet sent to a unicast address is delivered to the interface identified by that address.</p><p>Link-local unicast addresses are special automatically generated addresses used only within a particular network segment. Every interface supporting IPv6 is required to have a link-local address. These addresses are not guaranteed to be unique and are not meant to be routed across network boundaries.</p><p>Global unicast addresses (GUA) are globally unique and externally routable addresses on the IPv6 Internet. As defined by <a href="https://www.rfc-editor.org/rfc/rfc4291.html?ref=blog.jenningsga.com">RFC4291</a>, GUAs may be any address excluding the range of <code>FF00::/8</code>. The Internet Assigned Numbers Authority (IANA) has <a href="https://www.iana.org/assignments/ipv6-address-space/ipv6-address-space.xhtml?ref=blog.jenningsga.com">allocated the space</a> <code>2000::/3</code> for GUAs.</p><h3 id="anycast-addresses">Anycast Addresses</h3><p>An anycast address is an identifier for a set of interfaces, typically belonging to different nodes. A packet sent to an anycast address is delivered to one of the interfaces identified by that address. The interface chosen is considered the nearest, according to the routing protocol&apos;s distance measurement.</p><p>Use cases for anycast addresses include load balancing and efficiently distributing data among the nodes in a network. As an example, if a router fails, IPv6 nodes may establish a connection to a fail-over router, using a defined anycast address, and maintain the availability of the network.</p><p>Anycast addresses are allocated from the unicast address space and are syntactically indistinguishable from unicast addresses.</p><h3 id="multicast-addresses">Multicast Addresses</h3><p>A multicast address is an identifier for a set of interfaces typically belonging to different nodes. A packet sent to a multicast address is delivered to all interfaces identified by that address. Multicast addresses are present in the address space <code>FF00::/8</code>.</p><p>There are no longer IPv4-style <a href="https://datatracker.ietf.org/doc/html/rfc919?ref=blog.jenningsga.com">broadcast addresses</a> in IPv6. A broadcast address is used to transmit network information, such as DHCP and ARP, to all hosts on a network segment. IPv6 subnetworks are designed to be substantially larger, and broadcast traffic becomes inefficient at this scale.
Their function has been replaced by multicast addresses.</p><h4 id="well-known-multicast-addresses">Well-Known Multicast Addresses</h4><p>There exists an address registry, managed by IANA, covering well-defined <a href="https://www.iana.org/assignments/ipv6-multicast-addresses/ipv6-multicast-addresses.xhtml?ref=blog.jenningsga.com">multicast addresses</a> for standardized topics such as:</p><ul><li>The all-routers group (<code>FF01::2</code>)</li><li>The all-nodes group (<code>FF01::1</code>)</li><li>The <a href="https://www.rfc-editor.org/rfc/rfc6762.html?ref=blog.jenningsga.com">Multicast DNS (mDNSv6)</a> group (<code>FF01::FB</code>)</li></ul><h4 id="multicast-listener-discover">Multicast Listener Discovery</h4><p><a href="https://datatracker.ietf.org/doc/html/rfc2710?ref=blog.jenningsga.com">Multicast Listener Discovery</a> (MLD) is used to join and listen to multicast traffic. Devices send MLD reports on groups that they would like to join. Routers receive the reports and build multicast tables associated with the addresses. When the router receives multicast traffic, it is responsible for forwarding it to the individual interfaces.</p><h2 id="threada-use-case-for-ipv6-address-types">Thread - A Use Case for IPv6 Address Types</h2><p>We are seeing applications and standards utilize these different types of addresses to optimize the flow of traffic. For example, <a href="https://openthread.io/guides/thread-primer?ref=blog.jenningsga.com">Thread</a>, an open IPv6-based networking protocol for Internet of Things (IoT) devices, uses these address types for the operational communication of connected smart devices.</p><figure class="kg-card kg-image-card"><a href="https://openthread.io/guides/images/ot-primer-scopes.png?ref=blog.jenningsga.com"><img src="https://openthread.io/guides/images/ot-primer-scopes.png" class="kg-image" alt="The Internet Protocol v6 - The Digital Messiah" loading="lazy" width="1354" height="1338"></a></figure><ul><li><a href="https://openthread.io/guides/thread-primer/ipv6-addressing?ref=blog.jenningsga.com#multicast">Multicast messages</a> define and allow control of connected groups of nodes and allow devices to announce themselves on the network for service discovery.</li><li><a href="https://openthread.io/guides/thread-primer/ipv6-addressing?ref=blog.jenningsga.com#anycast">Anycast messages</a> determine the closest router to a particular node in a Thread mesh network. In addition, anycast addresses allow fault tolerance. If a router becomes unavailable, a new router is established and connectivity is maintained.</li></ul><h1 id="subnetworks-in-ipv6">Subnetworks in IPv6</h1><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2024/06/Address-space.svg" class="kg-image" alt="The Internet Protocol v6 - The Digital Messiah" loading="lazy" width="1039" height="254"><figcaption><span style="white-space: pre-wrap;">Example Subnetwork Addresses in IPv4 and IPv6</span></figcaption></figure><p>The recommended size of an IPv6 subnetwork is 64 bits. This allows for 2<sup>64</sup> individually addressable hosts within the subnet.
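This is substantially larger than IPv4; in fact, a single IPv6 subnetwork can contain the entire address space of IPv4.</p><p>A quick sketch with Python&apos;s standard <code>ipaddress</code> module makes the scale concrete (the <code>2001:db8::</code> prefix used below is reserved for documentation and is chosen purely for illustration):</p><pre><code class="language-python">import ipaddress

subnet = ipaddress.IPv6Network('2001:db8:1234:5678::/64')  # one standard /64 subnet
ipv4_space = ipaddress.IPv4Network('0.0.0.0/0')            # the entire IPv4 Internet

print(subnet.num_addresses)   # 18446744073709551616, i.e. 2**64 hosts
print(subnet.num_addresses // ipv4_space.num_addresses)
# 4294967296 -- one /64 holds 2**32 copies of the whole IPv4 address space</code></pre>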
<p>The vastness and consistent size of IPv6 subnetworks have many benefits, including:</p><ul><li>They support the future growth of IoT devices by significantly increasing the number of local connections available.</li><li>They make it easier for network engineers to manage and move subnetworks.<ul><li>There is less chance of host address collisions when connecting large organizational networks.</li></ul></li><li>They make it less feasible for attackers to use network probing techniques that sequentially scan for open services to attack, as discussed in <a href="https://datatracker.ietf.org/doc/html/rfc5157?ref=blog.jenningsga.com">RFC 5157</a>.</li></ul><h2 id="network-prefix-sizes">Network Prefix Sizes</h2><p>A &quot;prefix&quot; is the left-most bits of an address, defining a block of the IPv6 address space. As an example, <code>2001:0db8:1234::/48</code> indicates that the first 48 bits, <code>2001:0db8:1234</code>, are the global routing prefix. The remaining 80 bits can be divided by the assignee to delegate subnetworks to routers and to address hosts.</p><p>Internet Service Providers (ISPs) are <a href="https://www.ripe.net/publications/docs/ripe-690/?ref=blog.jenningsga.com#4-2-2---48-for-business-customers-and--56-for-residential-customers" rel="noreferrer">recommended to provide</a> an IPv6 prefix of at least /56 to homeowners and a /48 for organizations. Given these prefix sizes and the recommended IPv6 subnetwork size of 64 bits, we can calculate the total subnets available to each.</p><ul><li>A homeowner receiving a /56 prefix could create 2<sup>8</sup> individual subnets.</li><li>An organization receiving a /48 prefix could create 2<sup>16</sup> individual subnets.</li></ul><h2 id="64-bit-extended-unique-identifier-eui-64">64-bit Extended Unique Identifier (EUI-64)</h2><p>The interface identifier (IID) is the grouping of the right-most 64 bits of an IPv6 address. It is not meant to be allocated contiguously; instead, each client generates its own unique identifier.</p><p>You may be aware of <a href="https://en.wikipedia.org/wiki/MAC_address?ref=blog.jenningsga.com" rel="noreferrer">IEEE 802 MAC addresses</a>, which are globally unique identifiers assigned to all commercial networking hardware. These identifiers have been used since the early adoption of Ethernet and standardized as part of <a href="https://en.wikipedia.org/wiki/IEEE_802.3?ref=blog.jenningsga.com">IEEE 802.3</a>. While these 48-bit addresses allow for a large number of unique identifiers, the standard is projected to have an <a href="https://standards.ieee.org/wp-content/uploads/import/documents/tutorials/eui.pdf?ref=blog.jenningsga.com">end-of-life by 2080</a>.</p><p><a href="https://datatracker.ietf.org/doc/html/rfc4291?ref=blog.jenningsga.com#section-2.5.1">IPv6 has adopted</a> the EUI-64 standard, which provides an increased 64-bit capacity and is derived from a MAC address using a <a href="https://datatracker.ietf.org/doc/html/rfc4291?ref=blog.jenningsga.com#page-20">straightforward process</a>. We will look at alternatives to generating 64-bit IPv6 interface identifiers to preserve privacy in the sections below.</p><h1 id="dynamic-host-configuration-protocol-dhcp-version-6">Dynamic Host Configuration Protocol (DHCP) Version 6</h1><p>The fundamentals around DHCPv6 are different when compared with its DHCPv4 predecessor.
The biggest change is that a DHCPv6 service is no longer required to be run on each network segment to coordinate host addressing. In IPv4, a DHCPv4 service had to be present as the source of truth for host address reservations. With IPv6, on the other hand, clients can generate addresses from self-assigned identifiers using what is known as Stateless Address Autoconfiguration (SLAAC).</p><p>The primary responsibility of DHCPv6 is to provide a mechanism for delegating IPv6 prefixes to other routers, as defined by <a href="https://datatracker.ietf.org/doc/html/rfc3633?ref=blog.jenningsga.com">DHCPv6 Prefix Delegation</a> (DHCPv6-PD). A requesting router acts as a DHCP client and requests a prefix to be assigned by a delegating router, acting as a DHCP server.</p><p>As part of recommendations for DHCPv6-PD, delegated prefixes should have an indefinite lifespan. It can be costly to <a href="https://datatracker.ietf.org/doc/html/rfc4076?ref=blog.jenningsga.com" rel="noreferrer">renumber an entire site</a> when provided a new prefix, as each router must reconfigure its DHCP prefix settings and hosts must reconfigure their addresses. ISPs are encouraged to assign customers <a href="https://www.ripe.net/publications/docs/ripe-690/?ref=blog.jenningsga.com#5--end-user-ipv6-prefix-assignment--persistent-vs-non-persistent" rel="noreferrer">persistent prefix assignments</a>, which would not require renumbering.</p><h2 id="stateless-address-autoconfiguration-slaac">Stateless Address Autoconfiguration (SLAAC)</h2><p><a href="https://datatracker.ietf.org/doc/html/rfc4862?ref=blog.jenningsga.com">SLAAC</a> is often used as a means for obtaining IPv6 addresses, especially link-local ones. The following steps are executed by a device to set up an IPv6 stack using SLAAC:</p><ol><li>The device generates an EUI-64 and, with it, a link-local address.</li><li>The device executes <a href="https://datatracker.ietf.org/doc/html/rfc4862?ref=blog.jenningsga.com#section-5.4">Duplicate Address Detection</a> on the network segment. If a duplicate address exists, the device will restart the process.</li><li>A Router Solicitation (RS) is sent and a Router Advertisement (RA) is received to obtain the router prefix information.</li><li>The device creates a Global Unicast Address (GUA) using the prefix provided by the router, executing steps 1 and 2 with this new address.</li></ol><p>At this point, the device can communicate with neighbors on the network segment and globally through the advertised IPv6 router.</p><p>SLAAC allows for simplified network configuration, as stateful services, like DHCP, are no longer necessary to run an operable network. Devices can automatically configure their addresses, and communication can take place throughout the network segment.</p><h3 id="advertising-dns-configuration">Advertising DNS Configuration</h3><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://datatracker.ietf.org/doc/html/rfc8106?ref=blog.jenningsga.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">RFC 8106: IPv6 Router Advertisement Options for DNS Configuration</div><div class="kg-bookmark-description">This document specifies IPv6 Router Advertisement (RA) options (called &#x201C;DNS RA options&#x201D;) to allow IPv6 routers to advertise a list of DNS Recursive Server Addresses and a DNS Search List to IPv6 hosts.
This document, which obsoletes RFC 6106, defines a higher default value of the lifetime of the DNS RA options to reduce the likelihood of expiry of the options on links with a relatively high rate of packet loss.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://static.ietf.org/dt/12.15.0/ietf/images/ietf-logo-nor-180.png" alt="The Internet Protocol v6 - The Digital Messiah"><span class="kg-bookmark-author">IETF Datatracker</span><span class="kg-bookmark-publisher">Jaehoon Paul Jeong</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://static.ietf.org/dt/12.15.0/ietf/images/ietf-logo-card.png" alt="The Internet Protocol v6 - The Digital Messiah"></div></a></figure><p>To support IPv6 SLAAC deployments without requiring a DHCPv6 server running on each subnetwork, a mechanism was introduced to provide DNS configuration as part of the Router Advertisement (RA). Options such as the Recursive DNS Server (RDNSS) and DNS Search List (DNSSL) may be included in RA messages to inform hosts of this critical DNS information.</p><h1 id="end-to-end-routing">End-to-End Routing</h1><p>Due to the anticipated depletion of IPv4 addresses, <a href="https://datatracker.ietf.org/doc/html/rfc1631?ref=blog.jenningsga.com">Network address translation</a> (NAT) was introduced. This routing technique allows a single public IPv4 address to map to multiple local devices on a private network. When a local device connects to an external resource, a router must keep track of the connection state and context, and translate incoming and outgoing communication between the devices.</p><p>With IPv6, NAT is no longer required. Every single connected device can receive a unique and publicly accessible IPv6 address. Thus, connections are truly end-to-end and no translation by the router is needed. In its purest form, every host can directly communicate with another host through its IPv6 address, barring any firewall restrictions between the two hosts.</p><h2 id="neighbor-discovery-protocol-ndp">Neighbor Discovery Protocol (NDP)</h2><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2024/06/Router-Solicitation-3.svg" class="kg-image" alt="The Internet Protocol v6 - The Digital Messiah" loading="lazy" width="584" height="278"><figcaption><span style="white-space: pre-wrap;">Example of Router and Neighbor Solicitation</span></figcaption></figure><p>The <a href="https://datatracker.ietf.org/doc/rfc4861/?ref=blog.jenningsga.com">Neighbor Discovery Protocol</a> (NDP) is fundamental to IPv6 routing and discovery processes. It is used to perform several important tasks such as:</p><ul><li><strong>Router discovery</strong> to solicit and advertise routers on a network segment.</li><li><strong>Address resolution</strong> to locate neighboring hosts on a network segment, for Duplicate Address Detection and IP-to-MAC address translation.</li></ul><p>Similar to ARP in IPv4, NDP is used to translate IP addresses, with added benefits that can prevent the security shortcomings of ARP. For example, <a href="https://en.wikipedia.org/wiki/ARP_spoofing?ref=blog.jenningsga.com">ARP spoofing</a> can be mitigated by what is called <a href="https://datatracker.ietf.org/doc/html/rfc3971?ref=blog.jenningsga.com">Secure Neighbor Discovery</a> (SEND).</p><h1 id="ipv6-security-guidelines">IPv6 Security Guidelines</h1><p>Security was a fundamental design consideration of IPv6.
In fact, IPsec is considered a <a href="https://www.rfc-editor.org/rfc/rfc6434.txt?ref=blog.jenningsga.com">node requirement for all IPv6 implementations</a>, though it must be enabled by both the device and the application to be of use.</p><p>Some additional security considerations should be noted and may require extra configuration on network and end devices, which we will go over below.</p><h2 id="a-firewall-is-required">A Firewall is Required</h2><p>Without Network Address Translation (NAT), IPv6 hosts are directly addressable to hosts outside the network. This makes packet switching and routing much simpler, and it is how the Internet was originally designed to work, by way of the <a href="https://en.wikipedia.org/wiki/End-to-end_principle?ref=blog.jenningsga.com">end-to-end principle</a>.</p><p>That being said, this may cause unintended side effects: a network designed under the assumption that hosts would not be publicly addressable may find those hosts exposed directly to the Internet, and to bad actors, when switching to IPv6.</p><p>For these reasons, a firewall must be enabled on an IPv6-enabled router and should be configured to block unintended incoming traffic to its hosts.</p><h2 id="increased-security-through-privacy-addresses">Increased Security Through Privacy Addresses</h2><p>The IPv6 addressing architecture originally specified that the rightmost 64 bits, the interface identifier (IID), be derived from an interface&apos;s MAC address in the modified EUI-64 format. This poses security concerns, as a MAC address is globally unique and could be used to track an individual. Another security concern is that this could allow an attacker to find vulnerable targets, as MAC addresses can identify specific types of manufactured hardware.</p><p>One simple way around this is to generate a random MAC address. The Android operating system implements <a href="https://source.android.com/docs/core/connect/wifi-mac-randomization?ref=blog.jenningsga.com">MAC randomization</a> for each network it connects to. The randomized MAC address can be used to generate an EUI-64 without privacy concerns.</p><p>Another way to enhance device privacy is to use <a href="https://datatracker.ietf.org/doc/html/rfc4941?ref=blog.jenningsga.com">Privacy Extensions for SLAAC</a>. Privacy extensions define a mechanism for devices to create temporary addresses using randomized interface identifiers. These addresses are only used for outgoing connections, have a limited lifetime, and are rotated periodically, usually every 24 hours.</p><p>The <a href="https://datatracker.ietf.org/doc/html/rfc8064?ref=blog.jenningsga.com">currently recommended</a> way of creating <u>stable</u> IPv6 interface identifiers, while preserving the privacy of devices, is to use Stable Privacy Addresses defined in <a href="https://datatracker.ietf.org/doc/html/rfc7217?ref=blog.jenningsga.com">RFC 7217</a>. These addresses are created from a hashing function using inputs such as a device-generated secret key and the network prefix.
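This type of stable address remains the same unless the device moves to a new network or the network prefix changes.</p><p>A minimal sketch of the idea in Python is shown below. RFC 7217 leaves the exact pseudorandom function up to the implementation; SHA-256 and the specific inputs used here are illustrative assumptions, not the precise algorithm any given operating system uses:</p><pre><code class="language-python">import hashlib
import ipaddress

def stable_iid(prefix, interface, secret_key, dad_counter=0):
    # Hash the advertised prefix, interface name, collision counter, and a
    # device-local secret; keep the leftmost 64 bits as the opaque IID.
    data = prefix.encode() + interface.encode() + bytes([dad_counter]) + secret_key
    return int.from_bytes(hashlib.sha256(data).digest()[:8], 'big')

prefix = '2001:db8:abcd:12'  # hypothetical /64 prefix learned from an RA
iid = stable_iid(prefix, 'eth0', b'per-device secret')
address = ipaddress.IPv6Address(int(ipaddress.IPv6Address(prefix + '::')) | iid)
print(address)  # stable on this network, yet reveals nothing about the MAC</code></pre><p>Because the network prefix is one of the hash inputs, the same device produces an unrelated identifier on every network it joins, which is what prevents cross-network tracking.</p>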
<p>With all of these different methods of creating the 64-bit interface identifier, the bits representing the identifier lose meaning, and so it is recommended that the identifier <a href="https://www.rfc-editor.org/rfc/rfc7136?ref=blog.jenningsga.com">be considered opaque</a>.</p><h3 id="unauthorized-link-local-addressing">Unauthorized Link-local Addressing</h3><p>IPv6 traffic may be flowing between hosts through link-local addressing even if a network admin has not explicitly set up IPv6 on their network. IPv6 and link-local addresses are auto-configured by the hosts themselves.</p><p>One consequence of this is that there may be unintended communication between hosts on a network segment if they are IPv6-enabled. Both IPv6 and IPv4 addresses should always be considered when defining firewall rules and network Access Control Lists (ACLs).</p><h3 id="vpn-leakage-for-dual-stack-environments">VPN Leakage for Dual-Stack Environments</h3><p>A client that supports both IPv4 and IPv6 and relies on VPN connectivity to maintain anonymity should always make sure that IPv6 is enabled on the VPN client. If the VPN tunnel lacks support for IPv6, the client may inadvertently leak IPv6 traffic outside of the tunnel, breaking the user&apos;s anonymity.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://datatracker.ietf.org/doc/html/draft-gont-opsec-vpn-leakages-00?ref=blog.jenningsga.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Virtual Private Network (VPN) traffic leakages in dual-stack hosts/ networks</div><div class="kg-bookmark-description">The subtle way in which the IPv6 and IPv4 protocols co-exist in typical networks, together with the lack of proper IPv6 support in popular Virtual Private Network (VPN) products, may inadvertently result in VPN traffic leaks. That is, traffic meant to be transferred over a VPN connection may leak out of such connection and be transferred in the clear on the local network. This document discusses some scenarios in which such VPN leakages may occur, either as a side effect of enabling IPv6 on a local network, or as a result of a deliberate attack from a local attacker. Additionally, it discusses possible mitigations for the aforementioned issue.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://static.ietf.org/dt/12.11.0/ietf/images/ietf-logo-nor-180.png" alt="The Internet Protocol v6 - The Digital Messiah"><span class="kg-bookmark-author">IETF Datatracker</span><span class="kg-bookmark-publisher">Fernando Gont</span></div></div></a></figure><p>If the network the client is connected to does not support IPv6, one should still enable IPv6 on the VPN client and <a href="https://en.wikipedia.org/wiki/Black_hole_(networking)?ref=blog.jenningsga.com">black hole</a> all IPv6 traffic. This is done by <a href="https://protonvpn.com/support/prevent-ipv6-vpn-leaks/?ref=blog.jenningsga.com">ProtonVPN as a preventative measure</a> against leakage.</p><h3 id="unique-local-addressing-ula-and-nat">Unique Local Addressing (ULA) and NAT</h3><p>Network admins may consider, when transitioning from IPv4 to IPv6, keeping the fundamentals of NAT in place within their private networks.
<a href="https://datatracker.ietf.org/doc/html/rfc4193?ref=blog.jenningsga.com" rel="noreferrer">Unique Local Addresses</a> (ULA) were defined in IPv6 standards as a block of addresses accessible site-wide and not meant to be routed to the global address space. This would allow <a href="https://www.ietf.org/archive/id/draft-mrw-nat66-00.html?ref=blog.jenningsga.com">NAT66</a> to map a Global Unicast Address to an internal ULA address.</p><p>As explained in <a href="https://blogs.infoblox.com/ipv6-coe/ula-is-broken-in-dual-stack-networks/?ref=blog.jenningsga.com">this Infoblox article</a>, usage of ULA is fundamentally flawed within a dual-stack environment since IPv4 will always take precedence over ULA addresses. So the IPv6 stack would never be utilized over the IPv4 counterpart.</p><p>In conclusion, it is advised to design an IPv6 network around Global Unicast Addresses and not continue with the necessary designs and mindsets commonly used due to the limiting factors of IPv4.</p><h2 id="rogue-ipv6-router-advertisements">Rogue IPv6 Router Advertisements</h2><p>Routers on an IPv6 network advertise information to nodes that enable them to auto-configure and connect to their network. <a href="https://datatracker.ietf.org/doc/html/rfc6104?ref=blog.jenningsga.com">Rogue IPv6 router advertisements</a> may be sent to nodes, whether maliciously or by misconfiguration, which could lead to Man-in-the-Middle attacks or general network routing issues.</p><p>A similar situation can happen in IPv4, where rogue DHCP servers may be on the network. <a href="https://packetpushers.net/blog/five-things-to-know-about-dhcp-snooping/?ref=blog.jenningsga.com">DHCP snooping</a> can be deployed on layer 2 switches to help combat and secure networks for this IPv4 scenario. An equivalent mechanism, called <a href="https://datatracker.ietf.org/doc/html/rfc7113?ref=blog.jenningsga.com">RA-Guard</a>, can be used for IPv6-enabled routers on layer 2 switches to help detect and mitigate rogue routers on the network.</p><h2 id="neighbor-discovery-protocol-ndp-attacks">Neighbor Discovery Protocol (NDP) Attacks</h2><p>Duplicate Address Detection (DAD) is integral to the functioning of SLAAC. DAD <a href="https://datatracker.ietf.org/doc/html/rfc6583?ref=blog.jenningsga.com">may be exploited</a> by malicious actors for denial-of-service attacks on a network segment. When a node queries with a Neighbor Solicitation to determine whether an address is being used, the malicious attacker may repeatedly respond causing the node to continually reject addresses and be unable to connect to the network. Similarly, an attacker may try to flood routers with NDP queries, causing legitimate traffic to be disregarded or the router&apos;s Neighbor Cache to be overloaded.</p><p><a href="https://datatracker.ietf.org/doc/html/rfc3971?ref=blog.jenningsga.com#section-9.2.3" rel="noreferrer">Secure Neighbor Discovery (SEND)</a> can be used to mitigate these types of attacks. SEND must be enabled on each device, which includes steps like creating public keys and associating Cryptographically<a href="https://datatracker.ietf.org/doc/html/rfc3972?ref=blog.jenningsga.com"> Generated Addresses</a> (CGAs). I have not found any operating systems that support this out of the box and only a few third-party concepts are available for use at the time of writing. 
Bootstrapping each device to mitigate these types of attacks would be very difficult as well.</p><p>Alternatively, routers may rate-limit the number of NDP queries that can be sent over a particular time frame.</p><p>It should be noted that IPv4 has the same types of attack vectors through ARP. This <a href="https://blogs.infoblox.com/ipv6-coe/holding-ipv6-neighbor-discovery-to-a-higher-standard-of-security/?ref=blog.jenningsga.com">Infoblox blog article</a> goes over steps an admin can take to secure both protocols, but concludes that this is an avenue that requires more thought and consideration to secure.</p><h1 id="the-adoption-of-ipv6">The Adoption of IPv6</h1><p>We have seen that there are benefits to switching to IPv6, such as the increase in address space, simplified networking configuration by not requiring a running DHCP server for each subnetwork, and enhanced support for multicast and anycast traffic that allows for innovations in IoT systems.</p><p>That being said, the adoption of IPv6 has been slow. Standards for IPv6 were first introduced in 1995. According to the most recent adoption-by-country statistics reported by <a href="https://www.akamai.com/internet-station/cyber-attacks/state-of-the-internet-report/ipv6-adoption-visualization?ref=blog.jenningsga.com">Akamai</a> (as of June 2024), only a handful of countries have greater than 50% adoption rates, with the majority of countries having significantly lower rates.</p><p>The reasons for the slow adoption are complex. Resources on the Internet will want to be able to reach as many users as possible. The current Internet is dominated by IPv4 users, so it makes sense that services will prioritize IPv4 to maximize their reach. In addition, organizations and customers require that their cloud providers or ISPs provide them with sufficient prefixes and support for IPv6 utilization. On top of that, hardware vendors and networking software must be IPv6-aware to support it at every hop.</p><p>Some recently passed initiatives will accelerate the adoption of IPv6. The US Government has <a href="https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-07.pdf?ref=blog.jenningsga.com">put policies in place</a> to completely transition to IPv6-only deployments by 2025. It is recognized that IPv6 is the future, and the government wants to fully transition its resources to IPv6 and minimize the impact of running dual-stack environments.</p><p>As the number of addresses in the IPv4 space dwindles, the cost of owning and allocating them is increasing. As of February 1st, 2024, <a href="https://aws.amazon.com/blogs/networking-and-content-delivery/identify-and-optimize-public-ipv4-address-usage-on-aws/?ref=blog.jenningsga.com">AWS has started to charge</a> tenants per public IPv4 address allocated to their resources. Supply and demand will dictate the price for these addresses. Organizations will begin to see higher operational costs for IPv4, which may push them to begin transitioning their environments to IPv6.</p><h1 id="conclusion">Conclusion</h1><p>We have covered many areas of the IPv6 architecture and looked into differences between it and its predecessor, IPv4. In doing so, we saw that it is straightforward and allows for true end-to-end networking, as the Internet was originally intended to work. We have looked into use cases for the different address types and how they can make for more efficient and reliable routing of our data.
Finally, we&apos;ve looked into how to secure deployed IPv6 networks.</p><p>As IPv6 becomes increasingly adopted, we will see more benefits and greater innovations, further fueling the push to switch to the newest protocol. I, for one, am hopping on the bandwagon to welcome our new digital messiah!</p>]]></content:encoded></item><item><title><![CDATA[Automate Your Garage With Smart IOT Devices]]></title><description><![CDATA[A garage is a perfect place to get started with installing IOT devices for home automation.]]></description><link>https://blog.jenningsga.com/automate-your-garage/</link><guid isPermaLink="false">653e828c2027970001fbec51</guid><category><![CDATA[home automation]]></category><category><![CDATA[IOT]]></category><category><![CDATA[Home Assistant]]></category><category><![CDATA[Shelly]]></category><category><![CDATA[Garage]]></category><category><![CDATA[wifi]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Sat, 04 Nov 2023 23:58:49 GMT</pubDate><media:content url="https://blog.jenningsga.com/content/images/2023/10/feature_image-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.jenningsga.com/content/images/2023/10/feature_image-1.jpg" alt="Automate Your Garage With Smart IOT Devices"><p>In this post, we will take a look at some easy-to-install devices, such as the Shelly 1. The device can be used to engage a typical garage door opener, sense whether the door is open or closed, and react to events within our garage. We will use the <a href="https://www.home-assistant.io/?ref=blog.jenningsga.com">Home Assistant</a> open-source software to manage the devices and create the few automations we need to make our garage truly smart!</p><h1 id="making-a-garage-door-opener-smart-with-shelly-1">Making a Garage Door Opener Smart with Shelly 1</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/10/shelly-1-relay-1.jpg" class="kg-image" alt="Automate Your Garage With Smart IOT Devices" loading="lazy"></figure><p>The <a href="#products">Shelly 1</a> is a small yet very capable Wi-Fi-operated relay switch. With this device, we can remotely trigger our garage door opener to open or close the door. In addition, we can wire in a contact switch in order to remotely detect the position of the door.</p><h2 id="powering-the-shelly-1">Powering the Shelly 1</h2><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/10/shelly-power-supply.jpg" class="kg-image" alt="Automate Your Garage With Smart IOT Devices" loading="lazy"></figure><p>The Shelly 1 is capable of being powered by a 12V or 24-60V DC power supply. You should choose a power supply that is within the maximum voltage capabilities of your contact switch. In my case, my contact switch supports up to 28V DC. I had an unused 12VDC power supply lying around, so I repurposed it to power my Shelly 1 and contact switch.</p><p>The positive wire from the DC power supply should go to the <code>N+</code> terminal, while the negative wire goes into <code>L-</code>. Before turning on the device, make sure to set the jumper properly on the internal Shelly 1 contacts, which are under the removable cover. The Shelly 1 that I received was pre-configured for AC and not DC.
</p><p>See the <a href="https://kb.shelly.cloud/knowledge-base/shelly-1?ref=blog.jenningsga.com#Shelly1(SHSW-1)-Basicwiringdiagrams">official wiring diagram</a> for more details on how to wire the device.</p><h2 id="configuring-the-shelly-1">Configuring the Shelly 1</h2><p>When powering on the Shelly 1 for the first time, the device creates a Wi-Fi access point to connect to. You should see the Wi-Fi network as &quot;shelly1-XXX&quot;. Connect to this access point using a laptop or mobile device, and navigate to <a href="http://192.168.33.1/?ref=blog.jenningsga.com">192.168.33.1</a> to access the web UI to configure the device.</p><p>First, go to <code>Internet &amp; Security</code> and fill out the options under <code>WI-FI Mode Client</code> to connect to your home&apos;s Wi-Fi network.</p><p>Under <code>Timers</code>, we want to configure the <code>Auto Off</code> setting. Set a value of 0.5 seconds. With this, once the device is fully wired and remotely activated, the internal relay will close the circuit to the garage door opener. This triggers the opener to open or close. The Shelly 1 will automatically open the circuit after half a second, which will prevent the opener from continuously opening and closing the door.</p><p>Under <code>Settings</code>, set the <code>Power On Default Mode</code> to <code>Off</code>. For <code>Button Type</code>, select <code>Detached Switch</code>. This will prevent changes to the state of the garage door sensor from triggering our garage door opener.</p><p>You may consider checking the <code>Reverse inputs</code> option as well. If your switch is the NC (Normally Closed) contact type, the Shelly 1 firmware will consider the switch &quot;open&quot; when the door is closed. Reversing the inputs will display &quot;closed&quot; in this case.</p><p>For more information, see the <a href="https://kb.shelly.cloud/knowledge-base/shelly-1-web-interface-guide?ref=blog.jenningsga.com#:~:text=To%20connect%20to%20your%20Shelly,Shelly%201%20to%20connect%20to.">Shelly 1 web interface guide</a>.</p><h1 id="connecting-the-shelly-1-to-the-garage-door-opener">Connecting the Shelly 1 to the Garage Door Opener</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/10/garage-door-opener-wiring.jpg" class="kg-image" alt="Automate Your Garage With Smart IOT Devices" loading="lazy"></figure><p>Connecting the Shelly 1 to the garage door opener is very simple. On the side of the opener, there are usually several terminals. Refer to your manual for exactly how these terminals are used to initiate opening and closing of the garage door. This is the description in my manual for manually testing the opener:</p><!--kg-card-begin: markdown--><blockquote>
<p>Connect the opener to a power source and short across the screw terminals labeled &quot;PB&quot; and &quot;COM&quot;...</p>
</blockquote>
<!--kg-card-end: markdown--><p>So, I connect the <code>COM</code> terminal on the garage door opener to the <code>I</code> terminal on the Shelly 1 (the <code>I</code> terminal represents the input load) and connect the <code>PB</code> terminal to the <code>O</code> terminal (which represents the output load).</p><p>Once connected, you should be able to go into the Shelly 1 web interface and trigger the door to open and close!</p><h1 id="determining-the-position-of-the-garage-door">Determining the Position of the Garage Door</h1><p>In order to determine whether the garage door is open or closed, we wire in a <a href="#products">magnetic contact switch</a>. This is also known as a reed switch and uses a detached magnet. For NC contact switches, when the magnet is near, the circuit will close, allowing current to flow. When the magnet is moved away, the switch will open and the current will not flow. The Shelly 1 can detect the flow of current through the switch to determine the position of the door.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/10/garage-door-sensor.jpg" class="kg-image" alt="Automate Your Garage With Smart IOT Devices" loading="lazy"></figure><p>I found a convenient spot above my garage door, where I was able to screw in the wired switch. Then, I mounted the magnet portion onto the door itself in close proximity to the switch.</p><p>When the door is closed, the magnet is near and the circuit is complete. When the door is opened, the magnet moves away from the switch and the circuit is opened. We cannot know how far the door is open, but this gives us a good indication of whether the door is closed or not. In my testing, since the door opens both up and away from the wall, the magnet moves away from the switch rather quickly.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/10/sensor-wire.jpg" class="kg-image" alt="Automate Your Garage With Smart IOT Devices" loading="lazy"></figure><p>I ran the wires to the switch along the outside of the metal frame for the chain. My Shelly 1 is placed right above the door opener.</p><p>Wire the switch to the <code>SW</code> terminal of the Shelly 1, and place the switch&apos;s other wire into the <code>L-</code> terminal.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/10/Door-Open-Sensor.png" class="kg-image" alt="Automate Your Garage With Smart IOT Devices" loading="lazy"></figure><p>We will be able to tell whether the switch is working properly by going into the Shelly web interface. When the door is open, the middle of the button illuminates blue. Otherwise, this portion will be white.</p><h1 id="integration-with-home-assistant">Integration with Home Assistant</h1><p>At this point, we have a pretty functional way of opening and closing the garage door as well as seeing whether the door is closed or not. We can go further by using the <a href="https://www.home-assistant.io/integrations/shelly?ref=blog.jenningsga.com">Shelly integration</a> for Home Assistant.</p><p>After configuring the integration, we will have access to two entities in Home Assistant. One switch, named &quot;switch.shelly1_XXX&quot;, will allow us to engage the garage door opener to open or close.
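The other entity, a binary sensor named &quot;binary_sensor.shelly1_XXX&quot;, will allow us to sense the position of the door.</p><p>As an aside, once these entities exist, you can also drive them programmatically through Home Assistant&apos;s REST API. Below is a small sketch of that; the host, token, and entity ids are placeholders you would swap for your own setup:</p><pre><code class="language-python">import requests

HA_URL = 'http://homeassistant.local:8123'  # your Home Assistant instance
TOKEN = 'replace-with-a-long-lived-access-token'
HEADERS = {'Authorization': f'Bearer {TOKEN}'}

# Pulse the relay; the 0.5 s Auto Off timer opens the circuit again for us.
requests.post(
    f'{HA_URL}/api/services/switch/turn_on',
    headers=HEADERS,
    json={'entity_id': 'switch.shelly1_XXX'},
    timeout=10,
)

# Read the door sensor: 'on' means open, 'off' means closed.
state = requests.get(
    f'{HA_URL}/api/states/binary_sensor.shelly1_XXX_input',
    headers=HEADERS,
    timeout=10,
).json()['state']
print(state)</code></pre>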
<figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/10/Garage-HA-Dashboard.jpg" class="kg-image" alt="Automate Your Garage With Smart IOT Devices" loading="lazy"></figure><p>At this point, we can create a nice dashboard with our entities. The above is an example that I have configured in Home Assistant.</p><h1 id="alert-garage-door-is-open">Alert! Garage Door is Open!</h1><p>Have you ever come home to realize that you left your garage door open while you were away? Luckily, we are now able to use Home Assistant to alert us when the door has been left open for a certain period of time.</p><p>Before we do this, we need to edit our Home Assistant <code>configuration.yaml</code> to specify a <a href="https://www.home-assistant.io/integrations/notify/?ref=blog.jenningsga.com">notify</a> integration. There are many different options for sending notifications. I chose to use SMTP, since I can send an email as well as a text message to my AT&amp;T number using this method.</p><p>Here is an example for using Gmail. You may create an <a href="https://support.google.com/mail/answer/185833?ref=blog.jenningsga.com">App Password</a> to use with the configuration below.</p><pre><code class="language-yaml">notify:
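  # Gmail SMTP notifier; swap each &lt;replace&gt; placeholder for your own values.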
  - name: email_notification
    platform: smtp
    server: smtp.gmail.com
    port: 587
    timeout: 15
    sender: &lt;replace&gt;@gmail.com
    sender_name: &quot;Home Assistant&quot;
    recipient: &lt;replace&gt;@gmail.com
    starttls: true
    username: &lt;replace&gt;@gmail.com
    password: &lt;replace&gt;</code></pre><p>Go to <code>Settings -&gt; Automations</code> and create a new automation. You can switch to Edit in YAML in order to copy the following (replacing the entity_id with your corresponding Shelly 1 entity):</p><pre><code class="language-yaml">alias: Garage open notification
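# A state trigger with a &quot;for:&quot; block only fires once the sensor has stayed
# &quot;on&quot; (door open) for the full 10 minutes; closing the door sooner resets
# the timer. Replace binary_sensor.shelly1_XXX_input with your own entity.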
trigger:
  - platform: state
    entity_id:
      - binary_sensor.shelly1_XXX_input
    from: &quot;off&quot;
    to: &quot;on&quot;
    for:
      hours: 0
      minutes: 10
      seconds: 0
action:
  - service: notify.email_notification
    data:
      message: Garage door is open!
      title: Garage door is open!
      target: 111111111@mms.att.net
  - service: notify.email_notification
    data:
      message: Garage door is open!
      title: Garage door is open!
      target: some-email@example.com</code></pre><p>This will send an email and a text message after the garage door has been opened for more than 10 minutes.</p><p>Many mobile networks will allow emailing to a cell phone number through a specific carrier address. For example, AT&amp;T will allow sending an email as a text message using the address structure, <code>&lt;att-number&gt;@mms.att.net</code>.</p><p>If you receive this message, it is a good idea to have your smartphone configured so that you can remotely connect into your network and access your Home Assistant. Take a look at the following post for instructions on how to set that up with pfSense.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://blog.jenningsga.com/remote-vpn-server-pfsense/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Remote VPN Server with pfSense and a Dynamic IP Address</div><div class="kg-bookmark-description">How to configure a pfSense router for remote access using OpenVPN. These instructions will target residents who have a dynamic IP address.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://blog.jenningsga.com/content/images/size/w256h256/2020/01/favicon_jet.png" alt="Automate Your Garage With Smart IOT Devices"><span class="kg-bookmark-author">The Generally Available Blog</span><span class="kg-bookmark-publisher">Patrick Jennings</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://images.unsplash.com/photo-1555998322-9e293b1342da?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Automate Your Garage With Smart IOT Devices"></div></a></figure><h1 id="turning-on-lights-when-the-garage-is-opened">Turning On Lights When the Garage is Opened</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/10/kasa-motion-light-switch-1.jpg" class="kg-image" alt="Automate Your Garage With Smart IOT Devices" loading="lazy"></figure><p>I use a <a href="#products">Kasa motion sensor light switch</a> in my garage to activate and control my overhead LED lights. This light switch is also connected to Home Assistant. It activates well when motion is sensed in the garage, but it is much nicer to have the lights turn on as soon as the garage is opened instead of waiting for detected motion.</p><p>We can do this easily enough with the following Home Assistant automation.</p><pre><code class="language-yaml">alias: Garage Lights on when Garage opened
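# No &quot;for:&quot; block this time: the lights should come on the moment the door
# sensor flips from &quot;off&quot; (closed) to &quot;on&quot; (open).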
trigger:
  - platform: state
    entity_id:
      - binary_sensor.shelly1_XXX_input
    from: &quot;off&quot;
    to: &quot;on&quot;
action:
  - entity_id: switch.garage_lights
    service: switch.turn_on</code></pre><h1 id="nfc-tags-for-opening-the-garage">NFC Tags for Opening the Garage</h1><p>Home Assistant can <a href="https://www.home-assistant.io/integrations/tag/?ref=blog.jenningsga.com">program</a> <a href="#products">NFC Tags</a>, which can then be scanned and used for automations. We can use this to place a tag outside, which can then be scanned by a smartphone to trigger the garage door opener.</p><p>In Home Assistant, go to <code>Settings -&gt; Tags</code> and create a new Tag. This will create a unique tag ID that only our Home Assistant installation will have any association with. What is being written to the NFC tag will be a URL in the form of <a href="https://www.home-assistant.io/tag/%3Ctag%20id%3E?ref=blog.jenningsga.com">https://www.home-assistant.io/tag/&lt;tag id&gt;</a>. Only a device authenticated and connected to our Home Assistant can trigger this automation.</p><p>We can associate this tag with a new automation that will trigger the garage door to open or close when the tag is scanned:</p><pre><code class="language-yaml">alias: Open Garage when Garage Door Tag is scanned
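# The tag trigger fires when a device authenticated to this Home Assistant
# scans the tag. switch.toggle then toggles the Shelly relay wired to the
# opener's pushbutton terminals. Replace &lt;tag-id&gt; with the ID created
# under Settings -&gt; Tags.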
trigger:
  - platform: tag
    tag_id: &lt;tag-id&gt;
action:
  - service: switch.toggle
    target:
      entity_id: switch.shelly1_XXX</code></pre><p>With a mobile phone supporting NFC and the official Home Assistant <a href="https://companion.home-assistant.io/?ref=blog.jenningsga.com">companion app</a>, we can write this tag ID to a programmable NFC tag.</p><p>As a side note, if you will be adhering this tag to a surface, you should first test that the tag scans properly against that surface. I have found that the tags will not scan as well, or even at all, when placed near metal objects, which may be near or behind the wall or surface the tags are placed on.</p><p>And that is it! Scanning the NFC tag will trigger the door to open or close.</p><h1 id="conclusion">Conclusion</h1><p>With a few connected IOT devices, a little bit of wiring, and some software configuration, we are able to make our garage smart and connected. This enhances the safety of our home in addition to making it more convenient to operate. Finally, none of these devices require a connection outside of our home network or to external services, so we are preserving our privacy as well.</p><h1 id="products">Products</h1><p>If you are interested in the products that I used, here are some affiliate links to check out.</p><!--kg-card-begin: html--><table>
<thead>
<tr>
<th>Product</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td>Shelly 1 Relay Switch</td>
<td><a href="https://amzn.to/411D1c0?ref=blog.jenningsga.com">https://amzn.to/411D1c0</a></td>
</tr>
<tr>
<td>Heavy Duty Wired Garage Door Magnetic Contact Switch</td>
<td><a href="https://amzn.to/40lB1eh?ref=blog.jenningsga.com">https://amzn.to/40lB1eh</a></td>
</tr>
<tr>
<td>Kasa Smart Motion Sensor Switch</td>
<td><a href="https://amzn.to/3QkIuG3?ref=blog.jenningsga.com">https://amzn.to/3QkIuG3</a></td>
</tr>
<tr>
<td>Timeskey NFC Tags 20PCS NTAG 215 NFC Stickers</td>
<td><a href="https://amzn.to/3tSavwZ?ref=blog.jenningsga.com">https://amzn.to/3tSavwZ</a></td>
</tr>
<tr>
<td colspan="2">
<br> <br>
Here are some useful tools and products to help with wiring.</td>
</tr>
</tbody>
<thead>
<tr>
<th>Product</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td>22AWG UL2464 Power Cable</td>
<td><a href="https://amzn.to/3QoVNoU?ref=blog.jenningsga.com">https://amzn.to/3QoVNoU</a></td>
</tr>
<tr>
<td>haisstronica 6PCS Crimping Tool Set</td>
<td><a href="https://amzn.to/3FF7XVA?ref=blog.jenningsga.com">https://amzn.to/3FF7XVA</a></td>
</tr>
<tr>
<td>TICONN 250Pcs Heat Shrink Wire Connectors</td>
<td><a href="https://amzn.to/3skprDF?ref=blog.jenningsga.com">https://amzn.to/3skprDF</a></td>
</tr>
<tr>
<td>Kuject 120PCS Solder Seal Wire Connectors</td>
<td><a href="https://amzn.to/45WXBLi?ref=blog.jenningsga.com">https://amzn.to/45WXBLi</a></td>
</tr>
<tr>
<td>60 PCS Adhesive Cable Clips</td>
<td><a href="https://amzn.to/3QIpv9u?ref=blog.jenningsga.com">https://amzn.to/3QIpv9u</a></td>
</tr>
</tbody>
</table><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Security Best Practices for OAuth 2.0]]></title><description><![CDATA[This post will follow the guidelines and best practices detailed in the Internet Engineering Task Force article entitled "OAuth 2.0 Security Best Current Practice". In relation to the open source IAM software Keycloak.]]></description><link>https://blog.jenningsga.com/security-best-practices-for-oauth-2-0/</link><guid isPermaLink="false">64ac5f7ad6682000017b2793</guid><category><![CDATA[keycloak]]></category><category><![CDATA[OAuth]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Wed, 12 Jul 2023 02:08:47 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1614064641938-3bbee52942c7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fHNlY3VyaXR5fGVufDB8fHx8MTY4OTEwNjg2Mnww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1614064641938-3bbee52942c7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fHNlY3VyaXR5fGVufDB8fHx8MTY4OTEwNjg2Mnww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Security Best Practices for OAuth 2.0"><p>The OAuth framework has gained significant traction in our industry, making it the leading method of authorization. OAuth provides groundwork for authorizing end-users as well as non-human services through the use of different <a href="https://oauth.net/2/grant-types/?ref=blog.jenningsga.com">grant types</a>. With such a flexible and widely used authorization framework, it is important to keep up to date with the current best practices in order to keep our users and services secure.</p><p>This post will follow the guidelines and best practices detailed in the Internet Engineering Task Force article: <a href="https://datatracker.ietf.org/doc/html/draft-ietf-oauth-security-topics?ref=blog.jenningsga.com">OAuth 2.0 Security Best Current Practice</a><strong>.</strong></p><p>In addition, we will cover these topics as they apply to the open source IAM software, <a href="https://github.com/keycloak/keycloak?ref=blog.jenningsga.com">Keycloak</a>. You should have some basic knowledge of Keycloak, such as <a href="https://www.keycloak.org/docs/latest/server_admin/?ref=blog.jenningsga.com#configuring-realms">creating realms</a>, as well as some basic understanding of OAuth.</p><h1 id="how-may-oauth-be-attacked">How May OAuth be Attacked?</h1><p>OAuth implementations are being attacked through a few weaknesses and anti-patterns.</p><p>These include the usage of legacy grant types, such as <strong>implicit grants</strong> and <strong>password grants</strong>, as well as the bypassing of intrinsic security defenses, as with so-called &quot;<strong>Open Redirectors</strong>&quot;. Finally, the <strong>infrastructure</strong> and <strong>configuration</strong> of the authorization server (i.e. Keycloak) may allow leakage of state, authorization codes, or, at worst, access tokens.</p><p>OAuth is being used in environments requiring higher security standards, such as the banking industry as well as government and other public services. 
Understanding the different types of grants and how to configure them to be as secure as possible is critical to these industries.</p><p>We will look at mitigating these known methods of attack below.</p><h1 id="oauth-metadata">OAuth Metadata</h1><p>The first recommendation is to publish OAuth Metadata, as defined in <a href="https://www.rfc-editor.org/rfc/rfc8414.html?ref=blog.jenningsga.com">RFC8414</a>, and for clients to make use of this metadata to configure themselves instead of relying on static configurations.</p><p>Keycloak does this automatically by exposing the OAuth metadata in a standard location:</p><ul><li>https://&lt;keycloak-host&gt;/realms/&lt;realm&gt;/.well-known/openid-configuration</li></ul><p>This includes important fields and authorization server capabilities, such as the <code>authorization_endpoint</code> and <code>token_endpoint</code>. In addition, the <code>jwks_uri</code> links to a URI listing the public keys used by Keycloak to sign JWTs.</p><p>To learn more about the use of this metadata and connecting client libraries, see the Keycloak documentation below:</p><p><a href="https://www.keycloak.org/docs/latest/securing_apps/?ref=blog.jenningsga.com#endpoints">https://www.keycloak.org/docs/latest/securing_apps/#endpoints</a></p><h1 id="implicit-grants">Implicit Grants</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/07/implicit-flow.png" class="kg-image" alt="Security Best Practices for OAuth 2.0" loading="lazy"></figure><p>With Implicit grants, the access token is passed back in the authorization response directly. This means that the tokens would be passed in through the redirection URLs themselves, exposing a greater attack surface and other attack vectors such as:</p><ul><li>An access token may be saved in browser history.</li><li>If an attacker is able to redirect to a URI under their control (such as through an &#x201C;open redirector&#x201D;), they are able to get direct access to the access token.</li></ul><p>Thus, this grant type should not be used; the authorization code grant type should be used instead.</p><h1 id="resource-owner-password-credentials-grant">Resource Owner Password Credentials Grant</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/07/direct-access-grant.png" class="kg-image" alt="Security Best Practices for OAuth 2.0" loading="lazy"></figure><p>The Resource Owner Password Credentials (ROPC) grant, also known as the Direct access grant in Keycloak, is another legacy grant type that should not be used in production environments.</p><p>This legacy grant type was originally intended to migrate existing legacy applications and services, which were already handling user credentials directly, to the OAuth framework.</p><p>ROPC has numerous security problems, including:</p><ul><li>It insecurely exposes the credentials of the resource owner to the client.</li><li>Users are trained to enter their credentials in places other than the authorization server.</li><li>It causes problems when trying to implement two-factor auth, WebAuthn, WebCrypto, or any other multi-step authentication process.</li></ul><h1 id="authorization-code-grant">Authorization Code Grant</h1><p>Authorization Code grants are initiated by browser-based applications, generally by human or human-like user agents. 
The user agent is redirected to the authorization server, where they will be authenticated and authorized, and then redirected back to the application with an &quot;authorization code&quot;, which can be exchanged with the authorization server for an identity and access token. </p><p>In Keycloak, we can create a new client with &quot;Standard flow&quot; checked in order to enable this grant type.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/07/auth-code-client-create.png" class="kg-image" alt="Security Best Practices for OAuth 2.0" loading="lazy"></figure><h2 id="wildcard-redirect-uris">Wildcard Redirect URIs</h2><p>Registering redirect URIs is a fundamental validation layer within the authorization code grant. The authorization server will verify that a valid redirect URI is passed in when performing this auth flow.</p><p>OAuth is now being used in much more dynamic environments; at its inception, OAuth had actually assumed more static relationships between the client, authorization server, and resource servers.</p><p>Wildcard redirect URI patterns emerged as a means of allowing more dynamic client configurations. Unfortunately, these ambiguous patterns pose a problem due to their more complex implementation and are more error-prone to manage.</p><p>Let&apos;s look at the following redirection URI in Keycloak:</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/07/valid-redirect-uri-pattern.png" class="kg-image" alt="Security Best Practices for OAuth 2.0" loading="lazy"></figure><p>This <code>*</code> suffix allows any URI prefixed with &quot;https://somesite.example/&quot; to be redirected to after a successful login. Valid URIs that match this rule include:</p><ul><li>Any URL path, such as <a href="https://somesite.example/apis/users?ref=blog.jenningsga.com">https://somesite.example/apis/users</a>.</li><li>Any query parameters, such as <a href="https://somesite.example/?utm_campaign=blah&amp;ref=blog.jenningsga.com">https://somesite.example?utm_campaign=blah</a>.</li><li>Any fragments in the URL, such as <a href="https://somesite.example/?ref=blog.jenningsga.com#title">https://somesite.example#title</a></li></ul><p>If one is not familiar with how a particular OAuth implementation validates these redirect URI patterns, this may open the clients to exploitation.</p><p>For example, let&apos;s assume someone wants to allow redirects to a particular host and to any port on that host. They might incorrectly consider the following URI pattern:</p><ul><li>https://somesite.example*</li></ul><p>What they might not realize is that this may open them up to redirects to a host under an attacker&apos;s control, such as: &quot;https://somesite.example.attacker&quot;.</p><p>In general, the IETF article states that one should prefer static redirect URIs, without any wildcard, because of the issues outlined above.</p><p>There is one exception: port numbers in localhost redirect URIs for native apps. In Keycloak, you may use the <a href="https://www.keycloak.org/docs/latest/securing_apps/?ref=blog.jenningsga.com#redirect-uris">special redirect URI</a> &quot;http://127.0.0.1&quot;, which will also allow a redirect URI to localhost on any port.</p><p>Keycloak is looking to improve this user experience with the introduction of specific client policies. 
For instance, there could be a client policy that enables wildcard matching only on subdomains, or matching only on URL fragments.</p><p>You can see the ongoing discussion in the community below.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/keycloak/keycloak/discussions/9278?ref=blog.jenningsga.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Clients policies for wildcards in redirect-uris &#xB7; keycloak/keycloak &#xB7; Discussion #9278</div><div class="kg-bookmark-description">Keycloak currently allows wildcards in the redirect-uri by default, and allows any scheme to be used in redirect URLs as well. Some examples of valid redirect-uris that can be configured for client...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Security Best Practices for OAuth 2.0"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">keycloak</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/f983ef8f340ed4597ca5410107b2d7318e75ea8f7b03537b053b69d8df4407de/keycloak/keycloak/discussions/9278" alt="Security Best Practices for OAuth 2.0"></div></a></figure><h2 id="open-redirectors">Open Redirectors</h2><p>An open redirector is a specific endpoint that forwards a user&apos;s browser to any arbitrary URI. For example, a URI may be put in a query parameter which is used as a forwarding redirection URI to the authorization server:</p><ul><li><a href="https://some.example/?redirect_to=https%3A%2F%2Fother.example&amp;ref=blog.jenningsga.com">https://some.example?redirect_to=https%3A%2F%2Fother.example</a></li></ul><p>Open redirectors allow an attacker to construct URIs pointing to resources that they may control, exposing the authorization code (in the case when the authorization code grant is used) or the access token (if the legacy implicit grant is used).</p><h2 id="proof-key-for-code-exchange">Proof Key for Code Exchange</h2><p>It is advised for public clients using the authorization code flow to use <a href="https://datatracker.ietf.org/doc/html/rfc7636?ref=blog.jenningsga.com">Proof Key for Code Exchange (PKCE)</a>.</p><p>Although PKCE was designed as a mechanism to protect native apps, this advice applies to all kinds of OAuth clients, including web applications. For confidential clients (where the &quot;client authentication&quot; option is enabled in Keycloak), the use of PKCE is also recommended, as it may prevent CSRF and mitigate authorization code interception attacks.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/07/Keycloak---PKCE-settings.png" class="kg-image" alt="Security Best Practices for OAuth 2.0" loading="lazy"></figure><p>PKCE can be enforced, and configured to prevent <a href="https://datatracker.ietf.org/doc/html/draft-ietf-oauth-security-topics?ref=blog.jenningsga.com#section-4.8">PKCE downgrade attacks</a>, for a client in Keycloak by enabling a &quot;Proof Key for Code Exchange Code Challenge Method&quot; under the &quot;Advanced Settings&quot;. The use of PKCE mode &quot;S256&quot; as the code challenge method is highly encouraged.</p><h1 id="client-credentials-grants">Client Credentials Grants</h1><p>Client credentials grants are often used to interconnect services and are generally authorized using a client id and client secret.</p><p>In Keycloak, these are often considered Service Accounts. 
Enabling this grant type can be done by checking the &quot;Client authentication&quot; option (which generates a confidential client secret), as well as checking the &quot;Service accounts roles&quot; option during client creation.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/07/client-credentials-grant.png" class="kg-image" alt="Security Best Practices for OAuth 2.0" loading="lazy"></figure><h2 id="recommendations-for-client-credentials">Recommendations for Client Credentials</h2><p>By default, Keycloak generates a random client secret that is sufficiently complicated. The IETF article recommends using asymmetric methods instead, such as:</p><ul><li><a href="https://www.rfc-editor.org/rfc/rfc8705.html?ref=blog.jenningsga.com">mTLS</a></li><li><a href="https://www.rfc-editor.org/info/rfc7523?ref=blog.jenningsga.com">Signed JWTs</a></li></ul><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/07/service-account-client-credentials.png" class="kg-image" alt="Security Best Practices for OAuth 2.0" loading="lazy"></figure><p>Keycloak supports both of these types of client authentication. Certificates for signed JWTs can be imported, generated, or queried from a JWKS URL defined in the &quot;Keys&quot; tab. </p><p>X509 Certificates may be configured by following:</p><ul><li><a href="https://www.keycloak.org/docs/latest/server_admin/?ref=blog.jenningsga.com#_x509">https://www.keycloak.org/docs/latest/server_admin/#_x509</a></li></ul><h2 id="preventing-impersonation-of-resource-owners">Preventing Impersonation of Resource Owners</h2><p>Resource servers may make access control decisions based on the identity provided by the authorization server.</p><p>It may be possible to impersonate a particular resource owner as a service account if, for example, the client id were able to be specified by the client.</p><p>The administrators of an authorization server should not allow clients to influence their client id or any claim that could cause confusion with a genuine resource owner.</p><p>If this cannot be avoided, authorization servers must provide other means for the resource server to distinguish between the two types of access tokens.</p><p>One may configure service accounts in Keycloak to have any client id they wish. It is up to the administrator to establish a robust client id generation schema that prevents this type of impersonation.</p><h1 id="security-of-refresh-tokens">Security of Refresh Tokens</h1><p>The purpose of a refresh token is to allow the access token to be short-lived while allowing new access tokens to be requested once they have expired.</p><p>As advised by the IETF article, issuing a refresh token is optional and should be done at the discretion of the authorization server. In addition, the following conditions should be followed:</p><blockquote>The authorization server MAY issue a new refresh token, in which case the client MUST discard the old refresh token and replace it with the new refresh token. &#xA0;The authorization server MAY revoke the old refresh token after issuing a new refresh token to the client. 
&#xA0;If a new refresh token is issued, the refresh token scope MUST be identical to that of the refresh token included by the client in the request.</blockquote><p>If refresh tokens are issued, those refresh tokens must be bound to the same scope and resource servers (see below regarding audience-restricted access tokens) as originally consented by the resource owner.</p><p>Refresh tokens for public clients must be either sender-constrained or use refresh token rotation, as described below, in order to detect compromise:</p><blockquote>The authorization server issues a new refresh token with every access token refresh response. The previous refresh token is invalidated but information about the relationship is retained by the authorization server. If a refresh token is compromised and subsequently used by both the attacker and the legitimate client, one of them will present an invalidated refresh token, which will inform the authorization server of the breach. The authorization server cannot determine which party submitted the invalid refresh token, but it will revoke the active refresh token. This stops the attack at the cost of forcing the legitimate client to obtain a fresh authorization grant.</blockquote><h1 id="access-tokens">Access Tokens</h1><h2 id="access-token-privilege-restrictions">Access Token Privilege Restrictions</h2><p>Access tokens represent the authorized privileges and accesses granted on behalf of a user and, as such, must be kept confidential and as short-lived as possible.</p><p>According to the current best practices, the privileges associated with an access token should be restricted to the minimum required for the particular application or use case. In addition, the resource servers should validate that a given access token is meant for the particular use requested. This can be done through the following:</p><ul><li>Audience-restricted access tokens and/or the use of client scopes.</li><li>The authorization_details of <a href="https://www.rfc-editor.org/rfc/rfc9396.html?ref=blog.jenningsga.com">RFC9396</a>, which defines a structured mechanism for specifying fine-grained authorization requirements.</li></ul><h2 id="authorization-request-and-access-token-leakage">Authorization Request and Access Token Leakage</h2><p>Contents of authorization request or response URIs may be leaked unintentionally through the <code>Referer</code> header. Suppression of the <code>Referer</code> header should be done by applying an appropriate Referrer Policy. Keycloak will automatically set the policy to <code>no-referrer</code> in order to prevent this type of leakage.</p><p>An authorization code may end up in a user&apos;s browser history. Attackers may learn state from the authorization server if it contains links or third-party content. A client may also leak the authorization response if it includes third-party content.</p><p>To mitigate the effects of a leaked token, sender-constrained access tokens and audience restriction should be enforced.</p><h3 id="sender-constrained-access-tokens">Sender-Constrained Access Tokens</h3><p>A sender-constrained access token provides methods to prevent misuse of leaked access tokens.</p><p>A sender-constrained access token scopes the applicability of an access token to a certain sender. This sender must prove that they were the original recipient of the token for the acceptance of that token at the resource server.</p><p>The resource server is the one to perform the proof-of-possession check. 
This can be done using <a href="https://www.rfc-editor.org/rfc/rfc8705.html?ref=blog.jenningsga.com">mutual-TLS client authentication and certificate-bound access tokens</a>.</p><h3 id="audience-restricted-access-tokens">Audience Restricted Access Tokens</h3><p>An <a href="https://datatracker.ietf.org/doc/html/rfc7519?ref=blog.jenningsga.com#section-4.1.3">audience</a> may be associated with a particular access token in order to restrict the token to, ideally, one (or maybe more) resource servers. Audience restrictions limit the impact of token leakage.</p><p>In deployments where the authorization server knows the URLs of all resource servers, the authorization server may just refuse to issue access tokens for unknown resource server URLs.</p><p>Audience restrictions have benefits beyond token leakage mitigation: they allow the authorization server to create access tokens with differing claims based on the resource server specified.</p><p>Keycloak clients can be configured to provide an audience through mappers. See the documentation around <a href="https://www.keycloak.org/docs/latest/server_admin/?ref=blog.jenningsga.com#audience-support">Audience Support</a> for more information.</p><h1 id="authorization-server-infrastructure">Authorization Server Infrastructure</h1><p>It is recommended to use end-to-end TLS, which means complete encryption between the client, the authorization server, and any reverse proxy server in between. </p><p>A reverse proxy must sanitize any inbound requests to ensure the authenticity and integrity of all header values relevant for the security of the authorization servers. For example, the <code>X-Forwarded-For</code> header may be used to indicate the address of a connecting client.</p><p>A list of headers that should be sanitized, when operating Keycloak in one of its proxy modes, can be found below:</p><ul><li><a href="https://www.keycloak.org/server/reverseproxy?ref=blog.jenningsga.com">https://www.keycloak.org/server/reverseproxy</a></li></ul><p><strong>Cross-Origin Resource Sharing</strong> (CORS) may be enabled on the following endpoints:</p><ul><li>Token endpoint</li><li>Authorization server metadata endpoint</li><li>jwks_uri endpoint</li><li>Dynamic client registration endpoint</li></ul><p>CORS should not be used on the authorization endpoint. 
The client never accesses this endpoint directly; it only ever redirects users to it.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2023/07/web-origins---cors.png" class="kg-image" alt="Security Best Practices for OAuth 2.0" loading="lazy"></figure><p>Keycloak allows one to specify any number of CORS origins through the &quot;Web origins&quot; client configuration.</p><p>There is also open discussion for customization of the <code>Access-Control-Allow-Headers</code> returned by preflight requests in the issue below:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/keycloak/keycloak/issues/12682?ref=blog.jenningsga.com"><div class="kg-bookmark-content"><div class="kg-bookmark-title">[CORS] Allow Access-Control-Allow-Headers customization &#xB7; Issue #12682 &#xB7; keycloak/keycloak</div><div class="kg-bookmark-description">Description Allow customization for CORS Access-Control-Allow-Headers Discussion https://keycloak.discourse.group/t/customizing-access-control-allow-headers/7672 Motivation Keycloak has to support ...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Security Best Practices for OAuth 2.0"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">keycloak</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/a6cb9e1f835a45b31b652c3042d45590b50bbc9108c324a8051b07c4a6d70cc2/keycloak/keycloak/issues/12682" alt="Security Best Practices for OAuth 2.0"></div></a></figure><h1 id="cross-site-request-forgery">Cross Site Request Forgery</h1><p>The IETF article states:</p><blockquote>Clients must prevent Cross-Site Request Forgery (CSRF). CSRF refers to requests to the redirection endpoint that do not originate at the authorization server, but a malicious third party.</blockquote><p>A client may rely on the CSRF protection provided by PKCE, in which case PKCE must be enforced and must not be able to be downgraded (see above for directions on configuring that at the client level in Keycloak).</p><p>Keycloak prevents CSRF attacks against the login and user account portions, using the state parameter, as described in the documentation:</p><ul><li><a href="https://www.keycloak.org/docs/latest/server_admin/?ref=blog.jenningsga.com#csrf-attacks">https://www.keycloak.org/docs/latest/server_admin/#csrf-attacks</a></li></ul><h1 id="clickjacking">Clickjacking</h1><p>An attacker may embed the authorization endpoint user interface in an innocuous context, such as an iframe.</p><p>The IETF article states that authorization servers must prevent clickjacking attacks:</p><ul><li>By specifying the <code>X-Frame-Options</code> HTTP response header.</li><li>By utilizing Content Security Policy (CSP) level 2 [<a href="https://www.w3.org/TR/CSP2?ref=blog.jenningsga.com">CSP-2</a>] or greater.</li><li>Older browsers do not all support the required CSP levels to prevent iframe clickjacking, 
so JavaScript-based framebusting techniques should be used.</li><li>Authorization servers should allow for configuring specific allowed origins for particular clients.</li></ul><p>Keycloak prevents Clickjacking by setting CSP and <code>X-Frame-Options</code> headers as described in:</p><ul><li><a href="https://www.keycloak.org/docs/latest/server_admin/?ref=blog.jenningsga.com#clickjacking">https://www.keycloak.org/docs/latest/server_admin/#clickjacking</a></li></ul><h1 id="conclusion">Conclusion</h1><p>As we have seen, the OAuth framework provides many grant types for the many use cases applications need in order to authorize and authenticate users and services. Unfortunately, there are a lot of configuration options which may allow for insecure methods of exchange between the user agent, resource server, or authorization server. But we have looked at the IETF&apos;s article regarding OAuth 2.0 best practices and applied some of this knowledge to prevent common attack attempts.</p><p>I hope this has been informative and insightful. For more information, refer to these wonderful RFC documents below.</p><ul><li><a href="https://www.rfc-editor.org/info/rfc6819?ref=blog.jenningsga.com">RFC6819 - OAuth 2.0 Threat Model and Security Considerations</a></li><li><a href="https://www.rfc-editor.org/rfc/rfc6749.html?ref=blog.jenningsga.com#section-7">Accessing Protected Resources</a></li><li><a href="https://www.rfc-editor.org/rfc/rfc8705.html?ref=blog.jenningsga.com">RFC8705 - OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens</a></li><li><a href="https://www.rfc-editor.org/rfc/rfc8707.html?ref=blog.jenningsga.com">RFC8707 - Resource Indicators for OAuth 2.0</a></li><li><a href="https://www.rfc-editor.org/info/rfc8414?ref=blog.jenningsga.com">RFC8414 - OAuth 2.0 Authorization Server Metadata</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Home Networking Hardware Update]]></title><description><![CDATA[As 10G and Multi-Gig capable hardware becomes more affordable, I discuss how I am transitioning my home network to these newer and faster technologies.]]></description><link>https://blog.jenningsga.com/home-networking-hardware-update/</link><guid isPermaLink="false">632052e22a16870001d0a0e1</guid><category><![CDATA[pfsense]]></category><category><![CDATA[Proxmox]]></category><category><![CDATA[Networking]]></category><category><![CDATA[nbaset]]></category><category><![CDATA[multigig]]></category><category><![CDATA[10g]]></category><category><![CDATA[Google Fiber]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Wed, 18 May 2022 01:32:02 GMT</pubDate><media:content url="https://blog.jenningsga.com/content/images/2022/05/networking-hardware-logo.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.jenningsga.com/content/images/2022/05/networking-hardware-logo.jpg" alt="Home Networking Hardware Update"><p>As 10G and Multi-Gig capable hardware becomes more affordable, it made sense to do some upgrades to my home networking in order to transition to these newer and faster technologies. Many routers, switches, and Wi-Fi devices these days support the <a href="https://en.wikipedia.org/wiki/2.5GBASE-T_and_5GBASE-T?ref=blog.jenningsga.com">IEEE 802.3bz</a> standard. This standard allows for increased networking speeds of up to 2.5Gbps or 5Gbps over a typical 1Gbps connection, all while utilizing standard Cat5e cabling found in many residences and offices. 
In this post, I&apos;ll go over some of the hardware changes since my last post, <a href="https://blog.jenningsga.com/virtualize-pfsense-with-google-fiber-a-dream-networking-stack/">Virtualize pfSense for Google Fiber - A Dream Networking Stack</a>, to the core networking for my home in order to support these advanced technologies, as well as some configuration changes to make the networks secure and reliable.</p><h1 id="hardware-details">Hardware Details</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2022/05/enclosed-networking-rack.jpg" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>All of my networking equipment sits in a closet on my ground floor encased in a <a href="#products">NavePoint 9U network rack enclosure</a>. From top to bottom, we have the following:</p><ul><li><a href="#products">Patch Panel 24 Port Cat6A with Inline Keystone</a></li><li><a href="#products">Netgear 10-Port 10G / Multi-Gigabit (MS510TXUP)</a></li><li><a href="#products">Lenovo ThinkCentre M920x Tiny</a></li><li><a href="#products">Synology DS1821+ 8 Bay NAS</a></li><li><a href="#products">CyberPower OR500LCDRM1U 500VA/300W UPS</a></li></ul><p>The AC powered fans that came with the NavePoint enclosure were too loud for my liking, so I opted to purchase and replace them with <a href="#products">AC Infinity S7-P Dual 120mm fans</a>. The speed on these fans can be adjusted using a controller.</p><h2 id="lenovo-thinkcentre-m920x-tiny">Lenovo ThinkCentre M920x Tiny</h2><p>I upgraded from a <a href="#products">Qotom Q355G4</a> to a <a href="#products">Lenovo M920x Tiny</a> for use as my pfSense router and firewall appliance. This little 1L PC is capable of some serious computing. It was manufactured by Lenovo with a 6-core Intel i7-8700 processor and a 128GB NVMe SSD. I opted for the lowest RAM possible and swapped that with 32GB of my own. The unit can handle two NVMe solid state drives, so I installed another for more space and for some redundancy in the form of periodic backups.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2022/05/m920x-ram-ssd.jpg" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>The specs of this tiny PC are quite impressive and include the following:</p><!--kg-card-begin: markdown--><ul>
<li>Intel Core i7-8700 vPro (6 core / 12 thread, 3.20GHz up to 4.60GHz with Turbo Boost)</li>
<li>32GB DDR4 2666MHz (SO-DIMM)</li>
<li><a href="#products">SK hynix Gold P31 1TB PCIe NVMe SSD</a></li>
<li>128GB NVMe SSD</li>
<li><a href="#products">SuperMicro AOC-STGN-I2S Dual Port 10GB SFP+ NIC</a></li>
</ul>
<!--kg-card-end: markdown--><h3 id="upgrading-to-10g-routing">Upgrading to 10G Routing</h3><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2022/05/m920s-sfp-.jpg" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>I originally configured my Lenovo M920x Tiny with an Intel i340-t4 and had been using this with a similar configuration to my Qotom unit. I found that transitioning this unit to 10G was quite easy and cost-effective, and many of the parts can be found cheaply on eBay. To do this upgrade, I did the following:</p><ul><li>Swapped to the <a href="#products">01AJ940</a> PCI-E x16 riser card (originally only a x4 riser card was installed).</li><li>Installed the <a href="#products">SuperMicro AOC-STGN-I2S NIC</a></li><li>Purchased a <a href="https://www.reddit.com/r/homelabsales/comments/krf5w3/fsusnj_mellanox_cx322a_sfp_and_i350t4_kits_for/?ref=blog.jenningsga.com">3D printed bracket</a> from a fellow Redditor.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2022/05/AOC-STGN-I2S-Bracket.jpg" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>The 3D printed bracket is not a required item, but the PCI bracket used has to be specific to these Lenovo Tiny devices; not just any generic low-profile bracket will work with the case. You may remove the bracket altogether. That said, it makes the network card much more stable in the case, preventing it from moving when inserting and removing transceivers from the SFP+ ports.</p><p>My hypervisor (Proxmox) is installed on the 1TB NVMe drive. The other drive is used to store periodic backups of the running VMs.</p><h2 id="netgear-ms510txup">Netgear MS510TXUP</h2><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.jenningsga.com/content/images/2022/05/MS510TXUP.jpg" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>For my root switch, I converted from a <a href="#products">Ubiquiti UniFi Switch 8-Port 150W (US-8-150W)</a> to a <a href="#products">Netgear 10-Port 10G / Multi-Gigabit (MS510TXUP)</a> switch. This is a pretty interesting switch as it offers four 1G/2.5G/5G/10G Ethernet ports, another four 1G/2.5G Ethernet ports, as well as two SFP+ 1G/10G ports. This allows me to transition the network as the hardware on it gains support for these newer, higher speeds.</p><p>In addition, all 8 Ethernet ports have Power Over Ethernet capabilities utilizing IEEE 802.3bt (Type 3) POE++. Each port can provide a maximum power output of 60W, with the switch capable of providing up to 295W of power in total!</p><p>This Netgear switch can be configured either using a management UI served from the device or through a cloud interface similar to the Unifi controller. Unlike the Unifi product line, the cloud service provided by Netgear, <a href="https://www.netgear.com/business/services/insight/?ref=blog.jenningsga.com">Netgear Insight</a>, is a paid service. But it does seem priced reasonably well for home users at $9.99 / year, or $22 / year for the Pro service. I decided to forgo the payments and manage it through the device.</p><h1 id="network-details">Network Details</h1><p>Generally, a conventional router will have a WAN and one or more LAN ports. My setup is a bit different in that both WAN and LAN traffic are transmitted through the same network port. 
In order to achieve isolation, I use VLANs to segment the traffic through the root switch.</p><p>Here is a table of the physical ports on the root switch.</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Port</th>
<th>Description</th>
<th>Physical Status</th>
<th>Link Status</th>
<th>Frame Size</th>
</tr>
</thead>
<tbody>
<tr>
<td>mg1</td>
<td>Google&#xA0;Fiber</td>
<td>1000 Mbps Full Duplex</td>
<td>Link Up</td>
<td>1522</td>
</tr>
<tr>
<td>mg2</td>
<td>2ndFloorSpareBedroom</td>
<td>1000 Mbps Full Duplex</td>
<td>Link Up</td>
<td>1522</td>
</tr>
<tr>
<td>mg3</td>
<td>1stFloorFamilyRoom</td>
<td>1000 Mbps Full Duplex</td>
<td>Link Up</td>
<td>1522</td>
</tr>
<tr>
<td>mg4</td>
<td></td>
<td></td>
<td>Link Down</td>
<td>1522</td>
</tr>
<tr>
<td>xmg5</td>
<td>3rdFloorLab</td>
<td>1000 Mbps Full Duplex</td>
<td>Link Up</td>
<td>1522</td>
</tr>
<tr>
<td>xmg6</td>
<td></td>
<td></td>
<td>Link Down</td>
<td>1522</td>
</tr>
<tr>
<td>xmg7</td>
<td>Office</td>
<td>2.5 Gbps Full Duplex</td>
<td>Link Up</td>
<td>1522</td>
</tr>
<tr>
<td>xmg8</td>
<td>NAS</td>
<td>10 Gbps Full Duplex</td>
<td>Link Up</td>
<td>1522</td>
</tr>
<tr>
<td>xg9</td>
<td></td>
<td></td>
<td>Link Down</td>
<td>1522</td>
</tr>
<tr>
<td>xg10</td>
<td>pfSense</td>
<td>10 Gbps Full Duplex</td>
<td>Link Up</td>
<td>1522</td>
</tr>
</tbody>
</table>
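
<p>Before walking through the ports, here is a hypothetical sketch of what the VLAN segmentation mentioned above could look like. The VLAN IDs and subnets below are illustrative placeholders, not my actual assignments.</p>

<pre><code class="language-yaml"># Hypothetical VLAN-to-subnet plan (illustrative values only)
networks:
  - name: trusted-lan
    vlan: 10
    subnet: 192.168.10.0/24
  - name: guest-wifi        # fully isolated from every other network
    vlan: 20
    subnet: 192.168.20.0/24
  - name: management        # BMCs, iDRACs, IMMs, switch and AP management UIs
    vlan: 30
    subnet: 192.168.30.0/24
  - name: iot               # surveillance gear; no direct internet access
    vlan: 40
    subnet: 192.168.40.0/24
</code></pre>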
<!--kg-card-end: markdown--><p>The first port is used for egress and ingress traffic through my ISP, Google Fiber. Ports 2-8 are used for local traffic. Finally, we have the SFP+ ports, 9-10, which are dedicated to routing. Currently, I only connect one 10G port to my router, which is plenty for my routing needs. In the future, I could connect another SFP+ cable and configure link aggregation for a theoretical speed of up to 20G.</p><p>I&apos;m still in the process of upgrading the cables in my home. I hope to have multi-gig or 10G available to my lab in the near future. When WiFi 6E becomes reasonably priced, I plan on upgrading my two access points, on ports mg2 and mg3, and will have plenty of bandwidth capacity and power for those.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.jenningsga.com/content/images/2022/05/netgear-poe.png" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>Here is a breakdown of the status of POE from the switch management UI. The first port, mg1, powers the Google fiber jack. The following two ports power my Unifi Access points. One great feature of this Netgear switch is that you can power cycle the POE from the UI, so if an access point is acting up, you can easily cut power to it and have it restart from your browser. In addition, you can monitor power consumption of the POE devices.</p><h2 id="vlans-and-subnets">VLANs and Subnets</h2><p>I use VLANs and subnets to isolate and control cross traffic for all of my networks. The benefit of this is that I can configure which devices can communicate with one another through firewall rules, as well as specify which interfaces traffic is able to propagate through.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.jenningsga.com/content/images/2022/05/Home-Networking-Page-1.svg" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>For firewall rules, I completely isolate the guest WiFi from the rest of the other networks. In addition, I have some rules which establish trust boundaries on what devices are able to connect to my management network. The management network consists of all the BMCs and OOB management devices and accesses to those remote services. This includes the management IPs for the Unifi access points, management UIs for switches, <a href="https://en.wikipedia.org/wiki/Dell_DRAC?ref=blog.jenningsga.com">iDRAC</a>s, and <a href="https://en.wikipedia.org/wiki/IBM_Remote_Supervisor_Adapter?ref=blog.jenningsga.com#Integrated_Management_Module_(IMM)">IMM</a>s. Finally, the IOT network, which includes a secured home surveillance system, has a set of whitelisted IPs that can access the devices. These devices cannot reach out to the internet themselves.</p><p>A pfSense package I use to supplement security rules is <a href="https://docs.netgate.com/pfsense/en/latest/packages/pfblocker.html?ref=blog.jenningsga.com">pfBlockerNG</a>, which allows you to set country-specific GeoIP block lists. I decided to select every country which the United States currently has <a href="https://en.wikipedia.org/wiki/United_States_sanctions?ref=blog.jenningsga.com">sanctions</a> with and block them on all interfaces.</p><h3 id="google-fiber-ipv6">Google Fiber IPv6</h3><p>Google Fiber will give out a /56 prefixed block of IPv6 IPs for delegation. I found this to be a little tricky to set up. The settings in pfSense can be finicky and require some playing around with. 
I have also experienced many instances where restarting my router caused me to lose my IPv6 reservation. The only way I have found to get a new reservation is to power cycle the Google fiber jack itself.</p><p>That being said, when I do have an IPv6 reservation, I subdivide that /56 prefix into separate /64 prefixed networks. This allows for up to 256 different /64 prefix networks, which is just an astonishing amount of contiguous IPs coming from IPv4!</p><p>Here is what my WAN DHCP6 settings look like.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.jenningsga.com/content/images/2022/05/pfSense-WAN-DHCP6.png" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>Then, for each internal network interface, set <strong>IPv6 Configuration Type </strong>to <strong>Track Interface</strong>. Under the <strong>Track IPv6 Interface </strong>section, select <strong>WAN</strong> as the interface to track, and set a unique <strong>Prefix ID</strong> for each interface. Finally, go to <strong>Services &gt; DHCPv6 Server &amp; RA</strong> and, under <strong>Router Advertisements</strong> for each interface, make sure <strong>Router Mode </strong>is set to <strong>Assisted</strong>.</p><p>One thing to be aware of when using IPv6 is to make sure your firewall rules are taking the IPv6 protocol into account. Since you no longer have NAT to block direct access to local hosts, this becomes very important.</p><h1 id="hypervisor-configuration">Hypervisor Configuration</h1><p>Instead of running pfSense directly on my Lenovo M920x Tiny, I opted to virtualize it using Proxmox as a hypervisor. The configuration is very similar to what I had in my last post, but with a few important modifications.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2022/05/Proxmox-pfSense-VM-Hardware-1.png" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>The first change I made is that, instead of virtualizing the network ports, I am now passing the PCI-E device directly through to pfSense. This allows a bit better performance as well as the ability to enable hardware checksum offloading. Since I am also using another virtual network in Proxmox to connect other VMs running alongside pfSense, the virtual port on the pfSense VM to that Linux bridge must be of type <strong>Intel E1000</strong> and not <strong>VirtIO</strong>. I have found that hardware checksum offloading does not work well with the latter.</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.jenningsga.com/content/images/2022/05/Proxmox-Network-Interfaces-1.png" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>Here we can see the Linux bridge, <code>vmbr1</code>. This is used as a virtual network switch for other VMs on the hypervisor to communicate on. For example, my virtual machine running the Unifi Controller and Pi-Hole software is connected through this virtual bridge.</p><p>The Lenovo M920x Tiny has an integrated Ethernet port, labelled <code>eno1</code> here. This is used for management access to the hypervisor. If I am ever unable to reach pfSense or Proxmox through the root switch, I am able to connect a cable directly to this port. Then I have access to the Proxmox UI, where I have a Kali Linux VM available to start and use for troubleshooting.</p><p>Finally, instead of running LXC containers through Proxmox, I am running all services within virtual machines. 
I used to have privileged LXC containers running Docker engine with Pi-Hole and Unifi Controller as nested Docker containers, but I had issues upgrading Proxmox, as this is technically not a supported configuration for the hypervisor. I have found that VMs are a bit easier to self-contain and transfer, if need be, anyway.</p><h1 id="routing-and-security">Routing and Security</h1><p>DHCP is configured in pfSense such that hostnames are registered to the DNS forwarder. This allows me to use DNS to access and reference all the internal hosts. For each network, I allow DHCP to assign from half of the subnet&apos;s set of IPs. The other half of the subnet&apos;s IPs can be used for static IP mappings.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2022/05/pfSense-DHCP-Static-Mappings.png" class="kg-image" alt="Home Networking Hardware Update" loading="lazy"></figure><p>One pfSense package that is very nice to have is arpwatch. With this service, you can monitor your system for hosts communicating on your networks. It will detect new MAC addresses discovered through ARP messages and log the timestamp, hostname, MAC, and IP address of the newly discovered host. You can also have it email you if a new host is discovered.</p><p>For security on my Netgear MS510TXUP root switch, I have DHCP Snooping enabled on the ports and VLANs. With this, I can make sure that only pfSense is handling DHCP requests. Any packets for other rogue DHCP services would be dropped.</p><p>The Netgear switch has a few other services which can be used to secure your networks, such as port security and Dynamic ARP Inspection. These can be used to protect against MAC and IP address spoofing and ARP-based attacks.</p><p>There are also a few services which can be enabled, such as Spanning Tree Protocol (it supports MSTP and RSTP) as well as IGMP snooping and Multicast VLAN Registration (MVR). These services can help optimize broadcast and multicast traffic forwarding through your internal networks.</p><h1 id="conclusion">Conclusion</h1><p>I am becoming satisfied with the availability of new products supporting upwards of 10G. These products are becoming reasonably affordable for home users. Running pure 10G networks can be very expensive. Not only does cabling have to be upgraded, but the hardware and power required to run at 10G speeds can be prohibitive for most enthusiasts. These multi-gig (2.5G and 5G) solutions provide a nice in-between for getting past the gigabit wall we all have been experiencing for some time now.</p><p>I hope this post has been informative. I believe that it is very important to understand these networking fundamentals, especially for professionals in the IT industry. Being able to troubleshoot networking issues and understand infrastructure is an awfully important skill to have, and that is reason alone to over-engineer your own home networking :-) Happy learning!</p><h1 id="products">Products</h1><!--kg-card-begin: html--><table>
<thead>
<tr>
<th>Product</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td>NavePoint 9U network rack enclosure</td>
<td><a href="https://ebay.us/8xdONP?ref=blog.jenningsga.com">https://ebay.us/8xdONP</a></td>
</tr>
<tr>
<td>Patch Panel 24 Port Cat6A with Inline Keystone</td>
<td><a href="https://ebay.us/KIuCTe?ref=blog.jenningsga.com">https://ebay.us/KIuCTe</a></td>
</tr>
<tr>
<td>Netgear 10-Port 10G / Multi-Gigabit (MS510TXUP)</td>
<td><a href="https://ebay.us/oGuCSz?ref=blog.jenningsga.com">https://ebay.us/oGuCSz</a></td>
</tr>
<tr>
<td>Lenovo ThinkCentre M920x Tiny</td>
<td><a href="https://ebay.us/Zs9p3K?ref=blog.jenningsga.com">https://ebay.us/Zs9p3K</a></td>
</tr>
<tr>
<td>Synology DS1821+ 8 Bay NAS</td>
<td><a href="https://ebay.us/QFcrFZ?ref=blog.jenningsga.com">https://ebay.us/QFcrFZ</a></td>
</tr>
<tr>
<td>CyberPower OR500LCDRM1U 500VA/300W UPS</td>
<td><a href="https://ebay.us/hHVuS5?ref=blog.jenningsga.com">https://ebay.us/hHVuS5</a></td>
</tr>
<tr>
<td>AC Infinity S7-P Dual 120mm fans</td>
<td><a href="https://ebay.us/e1kCw8?ref=blog.jenningsga.com">https://ebay.us/e1kCw8</a></td>
</tr>
<tr>
<td>SK hynix Gold P31 1TB PCIe NVMe SSD</td>
<td><a href="https://ebay.us/MhJzvk?ref=blog.jenningsga.com">https://ebay.us/MhJzvk</a></td>
</tr>
<tr>
<td>SuperMicro AOC-STGN-I2S Dual Port 10GB SFP+ NIC</td>
<td><a href="https://ebay.us/K88gKO?ref=blog.jenningsga.com">https://ebay.us/K88gKO</a></td>
</tr>
<tr>
<td>01AJ940 PCI-E x16 riser card</td>
<td><a href="https://ebay.us/8FqsUd?ref=blog.jenningsga.com">https://ebay.us/8FqsUd</a></td>
</tr>
<tr>
<td>Ubiquiti UniFi Switch 8-Port 150W (US-8-150W)</td>
<td><a href="https://ebay.us/hmUTWM?ref=blog.jenningsga.com">https://ebay.us/hmUTWM</a></td>
</tr>
<tr>
<td>Qotom Q355G4</td>
<td><a href="https://ebay.us/wfvSiE?ref=blog.jenningsga.com">https://ebay.us/wfvSiE</a></td>
</tr>
</tbody>
</table><!--kg-card-end: html--><p>The above are the products used in this post.</p><p>When you click on links to various merchants on this site and make a purchase, this can result in this site earning a commission. Affiliate programs and affiliations include, but are not limited to, the eBay Partner Network.</p>]]></content:encoded></item><item><title><![CDATA[The Status of Storage Within Linux]]></title><description><![CDATA[We evaluate LVM, Btrfs, and ZFS from the perspective of a desktop user. And we look at the pros and cons of the different storage technologies.]]></description><link>https://blog.jenningsga.com/status-of-storage-within-linux/</link><guid isPermaLink="false">632052e22a16870001d0a0de</guid><category><![CDATA[Storage]]></category><category><![CDATA[LVM]]></category><category><![CDATA[BTRFS]]></category><category><![CDATA[ZFS]]></category><category><![CDATA[Linux]]></category><category><![CDATA[SSD]]></category><category><![CDATA[HDD]]></category><category><![CDATA[NVME]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Thu, 01 Apr 2021 02:57:55 GMT</pubDate><media:content url="https://blog.jenningsga.com/content/images/2021/03/ibm-storage.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.jenningsga.com/content/images/2021/03/ibm-storage.jpg" alt="The Status of Storage Within Linux"><p>When installing any Linux distribution, an often overlooked and misunderstood component of the configuration process is setting up the storage of the system.</p><p>In the early days of Linux, there were only a few options to consider, such as whether to format your partitions as EXT3, XFS, or, if very adventurous, a variant of the ReiserFS lineage. If one wanted some form of software RAID, the only option to consider was Linux mdadm. And setting up system partitions, such as a boot partition, was straightforward since things like UEFI and the <a href="https://en.wikipedia.org/wiki/EFI_system_partition?ref=blog.jenningsga.com">EFI system partition</a> were not standard.</p><p>Today, we have a lot more options to consider. With the development of more advanced and all-encompassing filesystems such as LVM, Btrfs, and ZFS, we are able to create more purpose-built storage architectures for our specific use cases. As an example, the physical storage composition of our systems can vary, with some drives dedicated to capacity and some to speed. Understanding these storage technologies can be important for increasing the performance of our systems as a whole. In addition, we can use these filesystems to protect ourselves from losing data due to drive loss, a failed system update, or even user error.</p><p>In this post, we are going to look at the current state of storage within Linux. We will evaluate LVM, Btrfs, and ZFS from the perspective of a desktop user. 
We will also look at the pros and cons of the different technologies and how we can use them to our advantage.</p><h1 id="logical-volume-manager-lvm-">Logical Volume Manager (LVM)</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2021/03/LVM-Architecture-2.svg" class="kg-image" alt="The Status of Storage Within Linux" loading="lazy"></figure><p><a href="https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)?ref=blog.jenningsga.com">LVM</a> offers an additional management layer between the physical drives and the filesystems initialized on the system.</p><p>There are a few key concepts within LVM to first understand: physical volumes, volume groups, and logical volumes. A physical volume represents any storage device that is initialized within the context of the LVM subsystem. Volume groups are made up of physical volumes, and represent the total combined capacity of a particular LVM subsystem. Finally, logical volumes utilize the storage pools defined by the volume groups. Logical volumes can be initialized with a particular RAID, mirroring, or striping configuration for data redundancy and performance.</p><p>By default, logical volumes will reserve from the volume group all of the space specified during creation. We can use <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/thinly_provisioned_volume_creation?ref=blog.jenningsga.com">thin pools</a>, which are special types of logical volumes, to dynamically grow and reserve space as the system needs. This allows us to over-allocate space within a Linux storage system.</p><h2 id="advantages-of-lvm">Advantages of LVM</h2><p>LVM can provide significant advantages compared to using only a traditional filesystem.</p><p>One of the first advantages is that LVM provides a way to dynamically add or remove storage from a system. To do this, we extend the volume group by adding a new physical volume. Afterwards, the logical volume can be resized, which is analogous to resizing a partition. Finally, the filesystem can be resized to gain the extra storage at that mount point. This can be a very convenient method of expanding storage, for example, on a virtual machine that has run out of space.</p><p>Another important feature is that of snapshots. A snapshot can be created on a live system and then be referenced at a later point to be written to or reverted to. Snapshots are a great way to prevent errors if created before upgrading system packages, or even to protect from deleting an important file.</p><p>Many systems these days will have both SSDs for faster storage and traditional spinning HDDs for capacity. With LVM, we can leverage this by setting up <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/enabling-caching-to-improve-logical-volume-performance_configuring-and-managing-logical-volumes?ref=blog.jenningsga.com">caching on logical volumes</a> such that read and write performance is increased while also allowing for the full capacity of the data drives. It is important to note that some configurations of a writeback cache require that the underlying storage be reliable, such as storage in a redundant RAID configuration and on backup power, because the loss of a writeback cache drive could cause filesystem data loss.</p>
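<p>As a rough sketch of the expansion and snapshot workflows described above (the volume group, logical volume, and device names here are assumptions for illustration):</p><pre><code class="language-bash"># Initialize a new disk and add it to an existing volume group
sudo pvcreate /dev/sdb
sudo vgextend vg0 /dev/sdb

# Grow a logical volume by 100 GiB, then grow the filesystem on it
sudo lvextend -L +100G /dev/vg0/data
sudo resize2fs /dev/vg0/data   # for EXT4; use xfs_growfs for XFS

# Snapshot the root volume before a risky upgrade...
sudo lvcreate -s -L 10G -n root-snap /dev/vg0/root

# ...and merge it back into the origin to revert the changes
sudo lvconvert --merge /dev/vg0/root-snap</code></pre>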
<p>LVM has been a part of the Linux ecosystem for a long time. Due to its maturity, LVM is found in many popular distributions by default. As a project largely developed by Red Hat, LVM plays a large role in CentOS as well as Red Hat Enterprise Linux (RHEL). LVM can be used in combination with other technologies, such as Red Hat&apos;s <a href="https://www.redhat.com/en/blog/look-vdo-new-linux-compression-layer?ref=blog.jenningsga.com">VDO</a>, to offer things like compression and deduplication. Finally, with LVM, you may still use any of the traditional filesystems such as XFS and EXT4, which are familiar to the majority of users in the Linux ecosystem.</p><h2 id="disadvantages-of-lvm">Disadvantages of LVM</h2><p>One disadvantage of LVM to be aware of is the issue of bit rot. Bit rot is due to <a href="https://en.wikipedia.org/wiki/Data_degradation?ref=blog.jenningsga.com">data degradation</a> at the physical drive level. LVM RAID utilizes the traditional mdadm software RAID, which comes with caveats for finding and repairing such data degradation. According to the <code>lvmraid</code> <a href="http://manpages.ubuntu.com/manpages/bionic/man7/lvmraid.7.html?ref=blog.jenningsga.com">manpage</a>:</p><!--kg-card-begin: markdown--><blockquote>
<p>The repair mode can make the RAID LV data consistent, but it does not know which data is correct. The result may be consistent but incorrect data. When two different blocks of data must be made consistent, it chooses the block from the device that would be used during RAID initialization. However, if the PV holding corrupt data is known, lvchange --rebuild can be used in place of scrubbing to reconstruct the data on the bad device.</p>
</blockquote>
<!--kg-card-end: markdown--><p>To summarize, with the LVM stack and a RAID configuration, we are able to detect inconsistencies in a replicated block of data. Yet the LVM utilities are only able to make a best guess as to which of the blocks is correct to make the system consistent. If the user knows which of the blocks is good, they may manually rebuild the block. We will see in the next sections how Btrfs and ZFS solve this issue using additional filesystem metadata.</p><p>The final disadvantages are due to the way snapshots work in LVM. Like logical volumes, a snapshot is exposed as a block device. Transferring an LVM snapshot for backup purposes, in an incremental and efficient way, can be difficult because of this fact. Also, there are several reports that keeping many snapshots on a system causes performance degradation, though I have not found any reputable investigations of this issue, so your results may vary. Maybe this is something we can look into in a future post ;-)</p><h2 id="summary-of-lvm">Summary of LVM</h2><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Pros</th>
<th>Cons</th>
</tr>
</thead>
<tbody>
<tr>
<td>&#x2705; Live Snapshotting of Partitions</td>
<td>&#x274C; No Corruption Detection</td>
</tr>
<tr>
<td>&#x2705; Read and Write Caching to Faster Storage</td>
<td></td>
</tr>
<tr>
<td>&#x2705; Stability of Features</td>
<td></td>
</tr>
</tbody>
</table>
<!--kg-card-end: markdown--><p>As we have seen, LVM is a great tool for expanding the functionality of traditional filesystems. It represents an old-school, modular way of configuring a storage stack within Linux, where features are added by creating additional logical abstractions of the storage between the physical disks and the filesystems.</p><p>LVM is a great option for use cases where bit rot is not a high-priority issue, or where the underlying storage is highly reliable, as may be the case in public cloud environments such as AWS or Google Cloud.</p><h1 id="btrfs">Btrfs</h1><p><a href="https://en.wikipedia.org/wiki/Btrfs?ref=blog.jenningsga.com">Btrfs</a> is a newer filesystem developed specifically for the Linux ecosystem. It offers many of the same benefits as LVM, but the features are included in the filesystem itself.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2021/03/Btrfs-Architecture-1.svg" class="kg-image" alt="The Status of Storage Within Linux" loading="lazy"></figure><p>A volume is defined during initialization of the filesystem using <code>mkfs.btrfs</code>. We pass in any number of block devices, as well as define the data replication strategies of this volume. Different strategies can be defined for the filesystem metadata and the actual file data of a particular Btrfs volume.</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Profile</th>
<th>Copies</th>
<th>Parity</th>
<th>Striping</th>
<th>Space utilization</th>
<th>Min/max devices</th>
</tr>
</thead>
<tbody>
<tr>
<td>single</td>
<td>1</td>
<td></td>
<td></td>
<td>100%</td>
<td>1/any</td>
</tr>
<tr>
<td>DUP</td>
<td>2 / 1 device</td>
<td></td>
<td></td>
<td>50%</td>
<td>1/any</td>
</tr>
<tr>
<td>RAID0</td>
<td></td>
<td></td>
<td>1 to N</td>
<td>100%</td>
<td>2/any</td>
</tr>
<tr>
<td>RAID1</td>
<td>2</td>
<td></td>
<td></td>
<td>50%</td>
<td>2/any</td>
</tr>
<tr>
<td>RAID1C3</td>
<td>3</td>
<td></td>
<td></td>
<td>33%</td>
<td>3/any</td>
</tr>
<tr>
<td>RAID1C4</td>
<td>4</td>
<td></td>
<td></td>
<td>25%</td>
<td>4/any</td>
</tr>
<tr>
<td>RAID10</td>
<td>2</td>
<td></td>
<td>1 to N</td>
<td>50%</td>
<td>4/any</td>
</tr>
<tr>
<td>RAID5</td>
<td>1</td>
<td>1</td>
<td>2 to N-1</td>
<td>(N-1)/N</td>
<td>2/any</td>
</tr>
<tr>
<td>RAID6</td>
<td>1</td>
<td>2</td>
<td>3 to N-2</td>
<td>(N-2)/N</td>
<td>3/any</td>
</tr>
</tbody>
</table>
<!--kg-card-end: markdown--><p>The way Btrfs handles mirrored <a href="https://btrfs.wiki.kernel.org/index.php/Manpage/mkfs.btrfs?ref=blog.jenningsga.com#PROFILES">RAID profiles</a> differs from the traditional definition. Btrfs works on the concept of data copies for mirrored RAID profiles. For example, if a RAID1 volume is created with three equally sized disks, each data block will have one associated copy, and the volume will be able to handle a single disk failure. This is different from a traditional RAID1 configuration in which all three disks would hold the same copy of the data, and the system would be able to handle two disk failures.</p><p>After initialization of the volume, a default top-level subvolume is created. A subvolume is considered a namespaced portion of the volume and can have its own snapshots or subvolumes anywhere within its own mounted directory structure. As a general rule, a subvolume is created to isolate the data within it, and can have separate mount options and snapshots.</p><p>A snapshot is a special type of subvolume. A snapshot is created by targeting an existing subvolume. Afterwards, a <a href="https://en.wikipedia.org/wiki/Copy-on-write?ref=blog.jenningsga.com">Copy-on-write</a> pointer to the subvolume is created as it exists at that moment in time. This uses no extra storage and is a very quick operation. The two subvolumes will appear to contain the same data, but the files within can be updated or deleted without impacting the other. We can then revert to a particular snapshot by mounting its subvolume and removing the original.</p><h2 id="advantages-of-btrfs">Advantages of Btrfs</h2><p>Unlike LVM, Btrfs generates checksums for each data and metadata block of the filesystem. With these checksums, Btrfs is able to detect silent corruption from bit rot, and fix it if a valid block has been replicated to another storage device. This is done automatically on all reads of data as a system is being used, and a <a href="https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-scrub?ref=blog.jenningsga.com">scrub</a> should be run periodically to verify data that is rarely read from the filesystem.</p><p>Btrfs uses a <a href="https://btrfs.wiki.kernel.org/index.php/SysadminGuide?ref=blog.jenningsga.com#RAID_and_data_replication">replication strategy</a> in which chunks are balanced across any number and size of devices in a volume. With traditional RAID, it&apos;s recommended to keep device sizes the same. With Btrfs, as long as the number of devices with free space satisfies the data replication strategy of the volume, Btrfs will handle balancing the data across all of the storage devices. Since Btrfs is flexible in this way, converting to a different RAID level, and adding or removing devices, can be done very easily by using the <code>btrfs balance</code> command.</p><p>A strong advantage of Btrfs is its inclusion of mount options which are available for each subvolume. Btrfs offers mount options which can enable SSD optimizations, discard support, as well as options for disabling checksums for an entire mount point. Another popular option is compression. When compression is enabled, zlib is the default algorithm, and the filesystem will use heuristics to determine whether a file should be compressed or not and mark it as such. Similar to converting RAID levels, we can use <code>btrfs balance</code> to rewrite our data using a different compression algorithm.</p>
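<p>A minimal sketch of these operations, assuming a few spare disks (the device paths and mount point are placeholders for illustration):</p><pre><code class="language-bash"># Create a volume with data and metadata mirrored across two disks
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
sudo mount -o compress=zstd /dev/sdb /mnt

# Subvolumes and an instant copy-on-write snapshot
sudo btrfs subvolume create /mnt/home
sudo btrfs subvolume snapshot /mnt/home /mnt/home-snap

# Verify checksums of rarely read data in the background
sudo btrfs scrub start /mnt

# Add a third disk, then convert the metadata to a three-copy
# profile (RAID1C3 requires a recent kernel)
sudo btrfs device add /dev/sdd /mnt
sudo btrfs balance start -mconvert=raid1c3 /mnt</code></pre>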
<p>The final advantage Btrfs brings to the table is the ability for <a href="https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3/4_and_ReiserFS?ref=blog.jenningsga.com">conversion</a> of Ext3, Ext4, and ReiserFS filesystems to Btrfs. Running the <code>btrfs-convert</code> utility will create a Btrfs subvolume with the existing filesystem data on it, and a default subvolume will be snapshotted from it. This operation can be completely reverted back to the original filesystem, if needed.</p><h2 id="disadvantages-of-btrfs">Disadvantages of Btrfs</h2><p>Btrfs has a few notable gotchas which a user will want to know about.</p><p>The first disadvantage is the stability of the parity RAID5 and RAID6 implementations. Utilizing these leaves a system vulnerable to combined drive and power failures, in which the filesystem may become corrupted beyond repair. This is known as the &quot;write hole&quot;.</p><!--kg-card-begin: markdown--><blockquote>
<p>Parity may be inconsistent after a crash (the &quot;write hole&quot;). The problem born when after &quot;an unclean shutdown&quot; a disk failure happens. But these are <em>two</em> distinct failures. These together break the BTRFS raid5 redundancy. If you run a scrub process after &quot;an unclean shutdown&quot; (with no disk failure in between) those data which match their checksum can still be read out while the mismatched data are lost forever.</p>
</blockquote>
<!--kg-card-end: markdown--><p>According to <a href="https://btrfs.wiki.kernel.org/index.php/RAID56?ref=blog.jenningsga.com">the Btrfs wiki</a> quoted above, a &quot;write hole&quot; event may happen with RAID5/6 in which an improper shutdown causes a mismatch between the parity and the data. The <a href="https://lore.kernel.org/linux-btrfs/cover.1559917235.git.dsterba@suse.com/?ref=blog.jenningsga.com">general advice</a> from developers, for those who want to utilize a parity RAID implementation, is to use a RAID1-type profile, for example RAID1C3, for the metadata, while RAID5 or RAID6 is used for the file data. This way, in case of a write hole event, the filesystem metadata is protected, while the actual filesystem contents may be scrubbed immediately on boot in order to make the filesystem consistent again.</p><p>Another gotcha concerning RAID that could catch a user off guard is the fact that a RAID1 volume may <a href="https://btrfs.wiki.kernel.org/index.php/Gotchas?ref=blog.jenningsga.com#raid1_volumes_only_mountable_once_RW_if_degraded">only be mounted once</a> as read-writable in a degraded state. For example, if a two-disk system experiences a single drive failure, the system may only be mounted as degraded once until the drive is replaced. Obviously, it is always advised to replace your failed drive as soon as possible. But this is an important limitation to know if you are expecting your systems to remain functional before redundancy is restored.</p><p>Finally, Btrfs does not currently support any type of caching to faster storage. A user is free to mix storage types in a single volume, but the filesystem will not do any type of prioritization of the data onto the faster drives.</p><h2 id="summary-of-btrfs">Summary of Btrfs</h2><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Pros</th>
<th>Cons</th>
</tr>
</thead>
<tbody>
<tr>
<td>&#x2705; Copy on Write Snapshots</td>
<td>&#x274C; Parity RAID Vulnerabilities</td>
</tr>
<tr>
<td>&#x2705; Corruption Protection</td>
<td>&#x274C; Caching Not Natively Supported</td>
</tr>
<tr>
<td>&#x2705; Conversion between RAID levels and EXT4</td>
<td></td>
</tr>
</tbody>
</table>
<!--kg-card-end: markdown--><p>As we have seen, Btrfs is a very exciting filesystem which provides many advanced functionalities to the Linux ecosystem. It combines a lot of desirable properties that exist in storage managers, like LVM, into a flexible filesystem solution.</p><p>Btrfs offers a convenient method of creating on-demand snapshots of a system. It also addresses the issue of bit rot by establishing checksums as a means of determining whether a block has been corrupted, and it uses redundancy profiles in order to fix any corrupted data. Btrfs is flexible in how it represents data on disk and offers convenient methods of converting from one redundancy profile to another, as well as converting from existing filesystems, such as EXT4 or ReiserFS.</p><p>As Btrfs is developed, it will continue to grow and improve on the stability of its features. Already, Fedora <a href="https://fedoramagazine.org/btrfs-coming-to-fedora-33/?ref=blog.jenningsga.com">has changed</a> its default filesystem to Btrfs, starting with Fedora 33. So this is an exciting storage technology which will only increase in popularity in the desktop space in the upcoming years.</p><h1 id="zfs">ZFS</h1><p>ZFS is an open-source storage technology that came out of Solaris in the early 2000s. After Oracle acquired Sun Microsystems in 2010, development was transferred to a closed-source model. Around this time, various forks and ports of the ZFS specification came to fruition. The development of these ZFS implementations is now under the umbrella of <a href="https://en.wikipedia.org/wiki/OpenZFS?ref=blog.jenningsga.com">OpenZFS</a>.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2021/03/ZFS-Architecture-1.svg" class="kg-image" alt="The Status of Storage Within Linux" loading="lazy"></figure><p>ZFS is a complex filesystem with many different concepts which are important to understand before setup and deployment.</p><p>The first concept is that of vdevs. In ZFS, vdevs encompass a set of disks and can have a particular redundancy profile associated. A vdev supports mirroring, single parity (RAIDZ1), double parity (RAIDZ2), and triple parity (RAIDZ3) block redundancies.</p><p>Another important concept is that of a zpool. A zpool has one or more vdevs underneath it to provide the underlying storage. A zpool does not have any particular redundancy associated; that is the responsibility of only the vdevs. A zpool will distribute writes across the vdevs, mostly according to the free space available to each at the time. But it is important to understand that there is no guarantee of how the data will be written, and this should not be confused with true data striping. A zpool is flexible in that vdevs under it can be dynamically added to the pool and can be of varying sizes and use any redundancy technique.</p><p>A third concept is that of datasets. A dataset is very similar to the concept of a subvolume within Btrfs. It can be used to subdivide the data within the file hierarchy so that a snapshot of the dataset will only contain what is within that dataset. A dataset can also be used to define different filesystem properties, such as compression, encryption, and even deduplication. As an example, here are the datasets defined for my desktop:</p><pre><code class="language-bash">[patrick@summit ~]$ zfs list -o name,mountpoint
NAME                                   MOUNTPOINT
bpool                                  /boot
bpool/sys                              /boot  
bpool/sys/BOOT                         none
bpool/sys/BOOT/default                 legacy
rpool                                  /
rpool/sys                              /
rpool/sys/DATA                         none
rpool/sys/DATA/default                 /
rpool/sys/DATA/default/home            /home
rpool/sys/DATA/default/root            /root
rpool/sys/DATA/default/srv             /srv
rpool/sys/DATA/default/usr             /usr
rpool/sys/DATA/default/usr/local       /usr/local
rpool/sys/DATA/default/var             /var
rpool/sys/DATA/default/var/lib         /var/lib
rpool/sys/DATA/default/var/lib/docker  /var/lib/docker
rpool/sys/DATA/default/var/log         /var/log
rpool/sys/DATA/default/var/spool       /var/spool
rpool/sys/DATA/default/var/tmp         /var/tmp
rpool/sys/ROOT                         none   
rpool/sys/ROOT/default                 /</code></pre><p>The final concept is that of zvols. A zvol exists at the same level as a dataset and is used to expose a raw block device to the system. This can be helpful if you want to have, for example, a separate swap partition on top of ZFS.</p><h2 id="advantages-of-zfs">Advantages of ZFS</h2><p>ZFS is quite an advanced filesystem and has support for more features than just about any filesystem to date. Like Btrfs, ZFS is based on the concept of Copy-on-write. The greatest benefit of this is that it allows for instantaneous live snapshots to be created from any dataset. Unlike Btrfs, ZFS takes the concept a step further and applies copy-on-write at the disk-management level as well.</p><!--kg-card-begin: markdown--><blockquote>
<p>Copy-on-write in ZFS isn&apos;t only at the filesystem level, it&apos;s also at the disk management level. This means that the RAID hole&#x2014;a condition in which a stripe is only partially written before the system crashes, making the array inconsistent and corrupt after a restart&#x2014;doesn&apos;t affect ZFS. Stripe writes are atomic, the vdev is always consistent, and Bob&apos;s your uncle.</p>
</blockquote>
<!--kg-card-end: markdown--><p>According to the Ars Technica article <a href="https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/?ref=blog.jenningsga.com">ZFS 101&#x2014;Understanding ZFS storage and performance</a>, due to the way that ZFS commits data to disk using copy-on-write mechanisms, the filesystem, unlike Btrfs, is not affected by the write hole problem. In fact, the parity implementations in ZFS, denoted as RAIDZ, are highly lauded by the community and are considered very stable.</p><p>An additional advantage ZFS brings to a system is an advanced caching hierarchy for both read and write operations. Traditionally, filesystems let the kernel page cache handle caching of recently used blocks and file metadata. ZFS takes another route and utilizes in-memory caches using more advanced algorithms. This read cache is called the Adaptive Replacement Cache (ARC) and is the main reason that ZFS is known to like a lot of RAM. One can also configure a high-performance L2ARC device, which extends this read cache to hold frequently read data that has been evicted from the ARC.</p><p>For speeding up synchronous write operations, ZFS utilizes a special device known as the Separate Log (SLOG) device. A user may register a high-endurance, write-optimized persistent storage device to handle the ZFS Intent Log (ZIL) on the SLOG device. This ZIL is simply a journal of write transactions to the ZFS pool. The ZIL may be replayed after a system crash, and is used to keep the filesystem consistent and the data within it reliable. It is important to note this cache is only for synchronous write operations, which are used heavily by databases and virtual machine disks.</p><p>Finally, ZFS offers many different features which may optionally be enabled on a per-dataset basis. Compression, deduplication, and encryption can all be enabled on each dataset. Like Btrfs, ZFS checksums all filesystem data in order to protect against bit rot.</p>
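<p>A brief sketch of these concepts in practice (the pool name and device paths below are assumptions for illustration):</p><pre><code class="language-bash"># Create a pool from a mirrored vdev, with one NVMe partition as an
# L2ARC read cache and another as a SLOG for synchronous writes
sudo zpool create tank mirror /dev/sda /dev/sdb \
    cache /dev/nvme0n1p1 log /dev/nvme0n1p2

# Datasets with per-dataset properties, plus an instant snapshot
sudo zfs create -o compression=lz4 tank/home
sudo zfs snapshot tank/home@before-upgrade
sudo zfs rollback tank/home@before-upgrade

# A zvol exposing an 8 GiB raw block device, e.g. for swap
sudo zfs create -V 8G tank/swap</code></pre>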
<h2 id="disadvantages-of-zfs">Disadvantages of ZFS</h2><p>ZFS is without a doubt a tremendous filesystem with many features and a stable implementation. That being said, there are a few disadvantages which a user should be aware of.</p><p>The first is that ZFS is under the open-source <a href="https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License?ref=blog.jenningsga.com">CDDL</a> license. Unfortunately, the CDDL license is considered incompatible with the <a href="https://en.wikipedia.org/wiki/GNU_General_Public_License?ref=blog.jenningsga.com">GPL</a> license. This poses problems because ZFS is not legally allowed to be included with the Linux kernel. Instead, it&apos;s advised that users load the required kernel code on boot via the ZFS Dynamic Kernel Module Support (DKMS) packages. This presents the following problems:</p><ul><li>The ZFS kernel modules must be included in the initramfs.</li><li>The ZFS kernel modules must be independently rebuilt for each new kernel release.</li></ul><p>The first issue only presents itself as a problem if the user compiles their own kernels. In that case, they need to always remember to include the ZFS DKMS packages with their custom kernels, especially if booting from ZFS or having root mounted on ZFS. The second is a problem that many users of rolling distributions, <a href="https://news.ycombinator.com/item?id=13937801&amp;ref=blog.jenningsga.com">for example Arch Linux</a>, can face. If a new Linux kernel is released, there may be a period of time when a system cannot be updated due to conflicts. This is due to the independently maintained ZFS DKMS packages lagging behind. It&apos;s an unfortunate reality caused by a simple license incompatibility.</p><p>The final disadvantage has to do with expanding the storage of a zpool for RAIDZ configurations. ZFS is not as flexible as Btrfs or LVM when adding and removing storage. As detailed in the article entitled &quot;<a href="https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html?ref=blog.jenningsga.com" rel="bookmark">The &apos;Hidden&apos; Cost of Using ZFS for Your Home NAS</a>&quot;, a vdev cannot be easily changed after initialization of a parity RAID configuration. This means that a disk cannot be added later down the road when you are ready to expand storage. One cannot simply add a disk and rebalance as with Btrfs. You will need to create an additional vdev, ideally of the same RAIDZ profile type, and add that vdev to the zpool. One other option is replacing each drive, one at a time, with a larger drive, and expanding the vdev that way. But this is wholly inefficient in time, as a resilver is required for each drive replacement.</p><h2 id="summary-of-zfs">Summary of ZFS</h2><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th>Pros</th>
<th>Cons</th>
</tr>
</thead>
<tbody>
<tr>
<td>&#x2705; Copy on Write Snapshots</td>
<td>&#x274C; License Incompatibilities with GPL</td>
</tr>
<tr>
<td>&#x2705; Corruption Protection</td>
<td>&#x274C; Least Flexible Storage Expansion</td>
</tr>
<tr>
<td>&#x2705; Read and Write Caching</td>
<td></td>
</tr>
<tr>
<td>&#x2705; Stability of Features</td>
<td></td>
</tr>
</tbody>
</table>
<!--kg-card-end: markdown--><p>ZFS is an amazing swiss-army knife for storage. It is able to handle all storage concerns well and offers many features that one would want in a filesystem. Its feature sets were the basis of comparison for newer filesystems like Btrfs, and it will continue to be used and enjoy a strong following among storage enthusiasts.</p><p>ZFS is a great fit for NAS devices where data integrity is critical, and it excels with large storage pools that use caching devices to boost read and write performance. It is also highly regarded as local persistent storage for virtualization platforms such as Proxmox, where it is able to handle on-demand snapshots and offer high-performance storage configurations.</p><h1 id="conclusion">Conclusion</h1><p>It&apos;s certainly an exciting time, with such storage technologies accessible to the end user. With projects like LVM, Btrfs, and ZFS in use and development, it is important to understand these technologies and see what their use cases are in relation to one another. More importantly, we can learn from the concepts they bring to the Linux ecosystem and use them to set up our own systems for better reliability, performance, and usability.</p><p>Thanks for reading. I really appreciate feedback on what you think about these conclusions, and what your own use cases, both at work and in your own home lab, have been!</p>]]></content:encoded></item><item><title><![CDATA[Remote VPN Server with pfSense and a Dynamic IP Address]]></title><description><![CDATA[How to configure a pfSense router for remote access using OpenVPN. These instructions will target residents who have a dynamic IP address.]]></description><link>https://blog.jenningsga.com/remote-vpn-server-pfsense/</link><guid isPermaLink="false">632052e22a16870001d0a0dd</guid><category><![CDATA[pfsense]]></category><category><![CDATA[OpenVPN]]></category><category><![CDATA[Networking]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Mon, 02 Nov 2020 03:54:05 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1555998322-9e293b1342da?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1555998322-9e293b1342da?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Remote VPN Server with pfSense and a Dynamic IP Address"><p>Our goal in this post is to set up a secure home VPN server which we can use to connect our phones or laptops remotely. This will allow us access to all of our home networked resources such as printers and security cameras. We will look at how to configure a pfSense router for remote access using OpenVPN. These instructions will target residents who have a dynamic IP address which may change without notice.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/11/vpn.svg" class="kg-image" alt="Remote VPN Server with pfSense and a Dynamic IP Address" loading="lazy"></figure><p>In order to properly configure a VPN server, we will need to establish a DNS record which will track our IP address.
Luckily, there are free services we can register with, and our pfSense router can keep the record updated for us.</p><h1 id="dynamic-dns">Dynamic DNS</h1><p>The <a href="https://docs.netgate.com/pfsense/en/latest/services/dyndns/index.html?ref=blog.jenningsga.com">Dynamic DNS</a> (DDNS) pfSense service allows us to create a configuration such that when our IP address changes, our DNS entry will be updated accordingly.</p><p>There are many DDNS services which allow you to register a free subdomain to track your IP address. For my setup, I use <a href="https://freemyip.com/help?ref=blog.jenningsga.com">freemyip.com</a>, which allows registration of a DNS entry and provides a convenient method for updating it when it changes.</p><p>After you register your unique subdomain, you will be given an update URL to call whenever your IP address changes.</p>
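<p>If you ever need to trigger an update manually, calling that URL from a shell is enough. A quick sketch (the token and subdomain below are placeholders; use the exact URL the service gives you):</p><pre><code class="language-bash"># Re-point the DDNS entry at whatever public IP this machine uses
curl "https://freemyip.com/update?token=YOUR_TOKEN&amp;domain=example.freemyip.com"</code></pre>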
<p>We can also do this automatically within pfSense.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/11/Dynamic-DNS-Add-Client.png" class="kg-image" alt="Remote VPN Server with pfSense and a Dynamic IP Address" loading="lazy"></figure><p>Go to the <strong>Dynamic DNS</strong> service within pfSense and click <strong>Add</strong> to start the process of creating a new DDNS client.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/11/Dynamic-DNS-Settings.png" class="kg-image" alt="Remote VPN Server with pfSense and a Dynamic IP Address" loading="lazy"></figure><p>The setup is very easy. Simply paste the update URL into the form, and make sure that <strong>Verify SSL/TLS Certificate Trust</strong> is selected as well.</p><h1 id="openvpn-server">OpenVPN Server</h1><p>After establishing our DDNS client, we can now begin setting up the remote VPN server configuration.</p><p>The easiest method of doing so is going through the OpenVPN Remote Access Server Wizard. The official Netgate documentation has <a href="https://docs.netgate.com/pfsense/en/latest/recipes/openvpn-ra.html?ref=blog.jenningsga.com">a very comprehensive example</a> which goes through all the options for the wizard. It is recommended to follow this in order to create a Certificate Authority and a corresponding server certificate, and to configure the OpenVPN server itself.</p><p>For my setup and the rest of this guide, I am using local users as the authentication type instead of any remote authentication available.</p><h1 id="creating-openvpn-clients">Creating OpenVPN Clients</h1><p>Once an OpenVPN server configuration is created, we can create the local users which will represent the clients connecting remotely.</p><p>It is recommended to have one user per device. For example, if you have a laptop and a phone which you would like to connect, you should create a user for each and not share. This allows us to easily revoke the user&apos;s client certificate in case the device is lost or compromised.</p><p>You can create a new user by going to <strong>System -&gt; User Manager</strong>.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/11/Create-User.png" class="kg-image" alt="Remote VPN Server with pfSense and a Dynamic IP Address" loading="lazy"></figure><p>You can go ahead and create a new OpenVPN client certificate by clicking the checkbox and selecting the certificate authority you created in the OpenVPN setup wizard above.</p><h1 id="export-openvpn-configurations">Export OpenVPN Configurations</h1><p>Once we have our users created, we can export any OpenVPN configuration using the <a href="https://docs.netgate.com/pfsense/en/latest/packages/openvpn-client-export.html?ref=blog.jenningsga.com">OpenVPN Client Export Package</a> available through the Package Manager in pfSense.</p><p>Once the package is installed, the export can be accessed by clicking the <strong>Client Export</strong> tab within the OpenVPN service page.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/11/OpenVPN-Client-Export.png" class="kg-image" alt="Remote VPN Server with pfSense and a Dynamic IP Address" loading="lazy"></figure><p>Make sure to input your DNS entry in the <strong>Host Name</strong> input, after selecting <strong>Other</strong> as the <strong>Host Name Resolution</strong> option. You will need to do this each time you are exporting a new configuration file.</p><p>Scrolling below, you may select a configuration file type to export depending on your needs.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/11/VPN-Cert-Export.png" class="kg-image" alt="Remote VPN Server with pfSense and a Dynamic IP Address" loading="lazy"></figure><p>If you use <a href="https://wiki.gnome.org/Projects/NetworkManager/?ref=blog.jenningsga.com">NetworkManager</a>, the <a href="https://wiki.archlinux.org/index.php/Networkmanager-openvpn?ref=blog.jenningsga.com">Networkmanager-openvpn</a> plugin works rather well to import a configuration file directly.
There are apps for <a href="https://play.google.com/store/apps/details?id=de.blinkt.openvpn&amp;ref=blog.jenningsga.com">Android</a> and <a href="https://apps.apple.com/us/app/openvpn-connect/id590379981?ref=blog.jenningsga.com">iOS</a> as well.</p>]]></content:encoded></item><item><title><![CDATA[Virtualize pfSense for Google Fiber - A Dream Networking Stack]]></title><description><![CDATA[This is a story of planning and executing on a networking re-design utilizing Google Fiber, pfSense virtualized in Proxmox, and Ubiquiti products.]]></description><link>https://blog.jenningsga.com/virtualize-pfsense-with-google-fiber-a-dream-networking-stack/</link><guid isPermaLink="false">632052e22a16870001d0a0db</guid><category><![CDATA[pfsense]]></category><category><![CDATA[Google Fiber]]></category><category><![CDATA[Proxmox]]></category><category><![CDATA[Unifi]]></category><category><![CDATA[Ubiquiti]]></category><category><![CDATA[Qotom]]></category><category><![CDATA[OVS]]></category><category><![CDATA[Networking]]></category><category><![CDATA[Virtualization]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Sun, 23 Feb 2020 20:40:34 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1558494949-ef010cbdcc31?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1558494949-ef010cbdcc31?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack"><p>I recently moved to my first home and immediately started planning improvements to the networking situation. This is a story of planning and executing on a networking stack re-design utilizing Google Fiber, pfSense virtualized in a Proxmox hypervisor, and some Ubiquiti products to boot.</p><h1 id="fixing-the-wiring-cruft">Fixing the Wiring Cruft</h1><p>Upon moving in, I wanted Ethernet runs to the rooms used for my office and lab. Fortunately, the home was wired with Cat5e homeruns to the rooms, but they were terminated as RJ11 (phone) jacks. So first things first, I re-terminated the Cat5e as RJ45.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/smart-networking-panel-before.jpg" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>Smart Panel Box Before</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/smart-networking-panel-after.jpg" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>Smart Panel Box After</figcaption></figure><p>There was no way I could have relied on WIFI for all my hardware and service needs. So this was an easy first win, in my books!</p><h1 id="google-fiber">Google Fiber</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/02/20200223_093407.jpg" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"></figure><p>I have access to the amazing Google Fiber in my area, so this would be the foundation to build my networking on.
I used the Google Fiber router for a few months but always felt the pull to go back to the more advanced routing of pfSense, which I had deployed in the past. It is difficult not to feel like you are missing out on the proper <a href="https://docs.netgate.com/pfsense/en/latest/book/vlan/index.html?ref=blog.jenningsga.com">VLAN</a> tagging and management, <a href="https://docs.netgate.com/pfsense/en/latest/firewall/index.html?ref=blog.jenningsga.com">firewall</a> rule definitions, <a href="https://docs.netgate.com/pfsense/en/latest/vpn/index.html?ref=blog.jenningsga.com">VPN</a> integration, and <a href="https://docs.netgate.com/pfsense/en/latest/ids-ips/index.html?ref=blog.jenningsga.com">IDS</a> which pfSense offers in its ecosystem. So with that in mind, I began my R&amp;D.</p><h1 id="hardware-utilized">Hardware Utilized</h1><p>pfSense runs nicely on commodity hardware (although some would argue only as long as Intel NICs are used, along with a CPU which supports AES-NI). In the past, I used a <a href="http://www.qotom.net/product/29.html?ref=blog.jenningsga.com">Qotom Q355G4</a> to run pfSense, and it worked reasonably well. Outfitted with 8GB of DDR3, 4 x Intel I211 NICs, and an efficient Intel i5-5200U processor, it is able to handle routing traffic along with a surprising amount of layer-2+ services in parallel.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/20200222_181025.jpg" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>Qotom Q355G4</figcaption></figure><p>In addition, I purchased a <a href="https://www.ui.com/unifi-switching/unifi-switch-8-150w/?ref=blog.jenningsga.com">US-8-150w</a> switch and a <a href="https://inwall-hd.ui.com/?ref=blog.jenningsga.com">UAP-IW-HD</a> access point from Ubiquiti to supplement the <a href="https://www.ui.com/unifi/unifi-ap-ac-pro/?ref=blog.jenningsga.com">AP-AC-PRO</a> I had already.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/20200222_181104.jpg" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>Ubiquiti US-8-150W</figcaption></figure><h1 id="virtualizing-pfsense">Virtualizing pfSense</h1><p>The idea of running pfSense under Proxmox was very appealing to me. The benefits are being able to manage pfSense in a VM and to offload some of the network interface details to the hypervisor. This would allow pfSense to only concern itself with the routing and application layers. Also, I will be able to easily and, most importantly, quickly back up and restore the VM during upgrades or testing of functionality. Finally, this would allow me to deploy additional services such as <a href="https://pi-hole.net/?ref=blog.jenningsga.com">Pi-Hole</a> and the <a href="https://www.ui.com/software/?ref=blog.jenningsga.com">Unifi Controller</a> on the router itself.</p><p>This is the design I came up with:</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/02/Housing-Google-Fiber-Network.svg" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"></figure><p>There are some interesting concepts in this design which make it really resilient and performant. The first is the usage of a static management port connecting to Proxmox.
This allows an admin to connect directly to the host, using either SSH or the Proxmox Web GUI, for debugging of any VM or hypervisor detail. So if something goes wrong on the network, I can always patch in directly to the router and fix it.</p><p>Next, the usage of Open vSwitch bridges and bonds makes it really simple to create aggregation groups and virtualized switches. This hides a lot of the implementation details from pfSense, and thus configuration is much simpler. In this case, I will be using one port for WAN, which goes to the Google Fiber jack, two ports in an LACP group to the US-8-150W switch for LAN, and finally the last port reserved for management access.</p><p>The final benefit of this design is the usage of LACP to the switch. This not only increases the theoretical throughput to the router to 2Gbps but also adds some failover in case one of the links in the group goes down.</p><p>One proposal I had planned was to use an OVS IntPort in order to tag all outbound packets with VLAN 2, as <a href="https://www.reddit.com/r/googlefiber/comments/5ou8hy/how_to_use_your_own_router_on_google_fiber_and/?ref=blog.jenningsga.com">required</a> by Google Fiber. Unfortunately, this did not work as planned, and I eventually did this configuration in pfSense, as we will see below.</p><h1 id="executing-on-the-proposal">Executing on the Proposal</h1><p>I set up the router separately before switching out the Google Fiber network box. Obviously, you will want to create the pfSense VM before exchanging the hardware. Next, I configured the Ubiquiti gear and the LACP trunk ports to the switch. Last, I installed all supplemental services, such as Pi-Hole.</p><h2 id="promox-installation-and-configuration">Proxmox Installation and Configuration</h2><p>Installation of Proxmox is very straightforward. Here is my final network configuration as indicated in the proposal.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/cervantes-network-interfaces.png" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>Final Proxmox Network Design</figcaption></figure><p>You should put the OVS bond interface in <strong>active-backup</strong> mode until you can configure the downstream switch for LACP. Then and only then, change this setting to <strong>LACP (balance-tcp)</strong>. This is where the management port comes in handy!</p><p>You may also notice that the NIC ports on the Qotom are not represented on the hypervisor host in the order that they are laid out on the chassis. I put a comment on each interface to note which physical port it is mapped to.</p>
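<p>For reference, the OVS portion of such a setup in <code>/etc/network/interfaces</code> might look roughly like the following sketch (the interface names are examples and will differ per machine):</p><pre><code class="language-bash"># Two physical NICs bonded together and attached to an OVS bridge
auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds enp1s0 enp2s0
    # Start in active-backup; change to balance-tcp (LACP) only once
    # the downstream switch has been configured for it
    ovs_options bond_mode=active-backup

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0</code></pre>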
<h2 id="pfsense-vm-creation">pfSense VM Creation</h2><p>I followed <a href="https://docs.netgate.com/pfsense/en/latest/virtualization/virtualizing-pfsense-with-proxmox.html?ref=blog.jenningsga.com">this guide</a> for virtualizing pfSense within Proxmox. The following are the resulting virtual machine details I used for pfSense.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/pfsense-vm-hardware.png" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>Hardware Details</figcaption></figure><p>In the advanced options, when setting up the CPU, you may enable the AES flag for the VM, if your processor supports it. Also, I decided to use ballooning for the memory configuration, from 1GB minimum to 4GB maximum. Finally, make sure you enable VirtIO on the HDD and network devices, as this will greatly affect performance. As the guide above mentions, I did have to disable <strong>hardware checksum offloading</strong> in pfSense.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/pfsense-vm-options.png" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>pfSense VM Options</figcaption></figure><p>Also, be sure to enable the option to start the VM on boot and set a low order priority so that it starts before the other VMs and containers you have running on the hypervisor.</p><h1 id="replacing-the-google-fiber-box">Replacing the Google Fiber Box</h1><p>Replacing the Google Fiber network box was straightforward but did require resolving a few issues.</p><p>As I said before, my initial goal was to use the Open vSwitch IntPort in order to tag the outbound packets with VLAN 2, but this turned out not to work for a reason I did not bother to spend time on. Instead, I followed <a href="https://homelab.nicktripp.com/2018/03/04/configuring-pfsense-for-google-fiber/?ref=blog.jenningsga.com">this guide</a>, which configures the tagging within pfSense, and the IntPort is mostly default and unused. As a side note, you could probably get away with using a simple bridge for the WAN in Proxmox instead.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/pfsense-VLAN-WAN.png" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>VLAN Interface</figcaption></figure><p>Within pfSense, create a VLAN interface with tag 2 and priority 3. Set the parent interface to the current WAN network interface.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/pfsense-WAN-Assignments.png" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>WAN Interface Assignment</figcaption></figure><p>In the Interface Assignments page, change the WAN assignment to the new VLAN 2 tagged interface.</p><p>Finally, you will want to power down the Google Fiber fiber jack for several minutes. This seemed to be required for pfSense to receive another WAN IP address from Google upstream.</p><h1 id="unifi-switch-configuration">Unifi Switch Configuration</h1><p>At this point in time, I had internet access, but I still had the bond configured as <strong>active-backup</strong>. So, I created an LXC container within Proxmox and a nested Docker container within it to run <a href="https://github.com/jacobalberty/unifi-docker?ref=blog.jenningsga.com">this Unifi Controller container</a>.</p><p>After setting up the controller and adopting all of the Ubiquiti gear, I set up LACP on the Unifi switch.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/core-switch-lacp.png" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>Unifi Controller LACP</figcaption></figure><p>Then I used my management port to change the bond mode to <strong>LACP (balance-tcp)</strong> within Proxmox.
And voil&agrave;!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2020/02/core-switch-lacp-overview.png" class="kg-image" alt="Virtualize pfSense for Google Fiber - A Dream Networking Stack" loading="lazy"><figcaption>Unifi Controller - 2000 Speed LACP</figcaption></figure><p>The Unifi Controller service is really powerful and something that I had not used prior to this endeavor. It has a lot of insight into the network details and a great overall user experience. I will definitely be spending a lot of time in this UI in the future.</p><h1 id="finalizing-the-configuration">Finalizing the Configuration</h1><p>In addition to all this, I created another LXC container on the Qotom to run Pi-Hole. Then I set the pfSense default DNS server to be the Pi-Hole and set the DNS resolver to forward all requests to it.</p><p>The final static IP mappings and network services which are accessible from the LAN look like:</p><ul><li>pfSense Web UI - <a href="https://192.168.1.1/?ref=blog.jenningsga.com">https://192.168.1.1</a></li><li>Pi-Hole Web UI - <a href="http://192.168.1.2/admin/?ref=blog.jenningsga.com">http://192.168.1.2/admin/</a></li><li>Proxmox Web UI - <a href="https://192.168.1.3:8006/?ref=blog.jenningsga.com">https://192.168.1.3:8006</a></li><li>Unifi Controller Web UI - <a href="https://192.168.1.4:8443/?ref=blog.jenningsga.com">https://192.168.1.4:8443/</a></li></ul><h1 id="conclusions-and-future-work">Conclusions and Future Work</h1><p>At this point, I have so much more insight into and control over my home network. The Ubiquiti products make it really simple to see weak points in a home WIFI network, such as those caused by interference or inadequate coverage. In addition, pfSense has loads of functionality which I still need to go through.</p><p>For those that are wondering, I am seeing 940Mbps download and 750Mbps upload, and the Qotom is not breaking a sweat.</p><p>Finally, there are certainly improvements I will make in the future, such as:</p><ul><li>Intrusion detection using Snort or Suricata</li><li><a href="https://blog.jenningsga.com/tracking-pfsense-firewall-events/">Forwarding firewall events to Elasticsearch</a></li><li>Proper VLAN tagging</li><li>OpenVPN</li><li>Guest WIFI</li></ul>]]></content:encoded></item><item><title><![CDATA[Securing Email for Better Inbox Placement]]></title><description><![CDATA[Common security standards for email sending in order to improve email delivery placement.]]></description><link>https://blog.jenningsga.com/securing-email/</link><guid isPermaLink="false">632052e22a16870001d0a0da</guid><category><![CDATA[email]]></category><category><![CDATA[security]]></category><category><![CDATA[Web]]></category><category><![CDATA[Inbox]]></category><category><![CDATA[SMTP]]></category><category><![CDATA[DKIM]]></category><category><![CDATA[SPF]]></category><category><![CDATA[DMARC]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Sat, 08 Feb 2020 22:26:54 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1563762270340-3f5fde3243cd?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1563762270340-3f5fde3243cd?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Securing Email for Better Inbox Placement"><p>In this post, we will
look at common security standards for email sending in order to improve email delivery placement. We will look at methods, such as DKIM and the Sender Policy Framework (SPF), which we can use to establish trust between a sender and receiver. We will also look at a few tools which we can use to test and verify that a domain configuration is working properly.</p><h1 id="why-is-email-security-important">Why is Email Security Important?</h1><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://images.unsplash.com/photo-1477281765962-ef34e8bb0967?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" class="kg-image" alt="Securing Email for Better Inbox Placement" loading="lazy"><figcaption>Photo by <a href="https://unsplash.com/@theunsteady5?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Edwin Andrade</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></figcaption></figure><p>The Simple Mail Transfer Protocol (<a href="https://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol?ref=blog.jenningsga.com">SMTP</a>), which is the foundation for sending email today, was built during a time when encryption and authorization were not widely used. As more individuals, businesses, and entities relied on internet communication through email, it became apparent that protection from spoofing, spamming, and other fraudulent acts needed to be added on top of the protocol.</p><p>This brings us to our first point: <u>email is not secure by default</u>. It is trivial to create an email which looks like it is coming from another party. Just as it is possible to type another individual&apos;s name on a letterhead and send it through conventional mail, the same is possible with email. As a result, a set of standards for establishing trust between a sender and receiver were created.</p><p>For the above reasons, it is clear that Email Service Providers, such as Gmail and Outlook, <u>will rank secure and trusted email higher than insecure counterparts</u>. If you do not ensure proper security of your email and domain configuration, your email will be vastly more likely to end up in spam folders, get caught by spam filters, or, worse, cause a domain blacklisting. All of these scenarios will cause your delivery performance to be poor, resulting in fewer opens and clicks and less outreach to your audience.</p><h1 id="email-security-used-today">Email Security Used Today</h1><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/02/DKIM-Verification-Color-1.svg" class="kg-image" alt="Securing Email for Better Inbox Placement" loading="lazy"></figure><p>From an email service provider and user perspective, there are several security goals that need to be addressed:</p><ul><li>Validate that the content of an email has not been changed from sender to receiver.</li><li>Validate that the sender is authorized to send an email as a particular identity.</li></ul><p>Both of these concerns are addressed using public Domain Name System (<a href="https://en.wikipedia.org/wiki/Domain_Name_System?ref=blog.jenningsga.com">DNS</a>) records.</p><h2 id="dkim">DKIM</h2><p><a href="https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail?ref=blog.jenningsga.com" rel="nofollow">DomainKeys Identified Mail (DKIM)</a> is used to detect email content modifications. DKIM does this by adding a domain signature to each email sent.
This signature is computed by using public-key cryptography to sign a hash of the headers and body of an email.</p><p>The idea is that the sender uses a private key to sign all outbound email and attaches the <code>DKIM-Signature</code> as a header to the email. The receiving email provider then uses this signature, the sender&apos;s public key published in DNS, and its own computation of the hash of the headers and body to verify that the email content has not changed.</p><p>Setting up DKIM keys and DNS records will vary from system to system. Here are some example documents to get you started:</p><ul><li>G Suite: <a href="https://support.google.com/a/answer/174124?hl=en&amp;ref_topic=2752442&amp;visit_id=1-636320706039003987-1662906503&amp;rd=1&amp;ref=blog.jenningsga.com">Enhance security for outgoing email (DKIM)</a></li><li>Postfix: <a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-dkim-with-postfix-on-debian-wheezy?ref=blog.jenningsga.com">How To Install and Configure DKIM with Postfix on Debian Wheezy</a></li></ul><h2 id="spf">SPF</h2><p>While DKIM validates the content of an email, the Sender Policy Framework (<a href="https://en.wikipedia.org/wiki/Sender_Policy_Framework?ref=blog.jenningsga.com">SPF</a>) is used to validate that a sender is authorized to send email as a particular domain or identity.</p><p>From the <a href="http://www.open-spf.org/FAQ/Common_mistakes/?ref=blog.jenningsga.com">Common Mistakes of SPF</a>, &quot;the purpose of SPF is to advertise your domain&apos;s mail servers&quot;, and this is done by specifying each server which is authorized to send email. These authorizations are published as TXT records attached to the sending domain.</p><!--kg-card-begin: markdown--><pre><code>&quot;v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.123 a -all&quot;
</code></pre>
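<!--kg-card-end: markdown--><p>As a quick aside, you can inspect the TXT records a domain actually publishes using <code>dig</code>. The domain and DKIM selector below are placeholders; substitute your own:</p><!--kg-card-begin: markdown--><pre><code class="language-bash"># Look up a domain&apos;s SPF policy, published as a TXT record on the domain itself
dig +short TXT example.com
# Look up a DKIM public key, published at &lt;selector&gt;._domainkey.&lt;domain&gt;
dig +short TXT selector1._domainkey.example.com
</code></pre>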
<!--kg-card-end: markdown--><p>Breaking the example record down: it defines the framework version, an IPv4 address range, and a single IPv4 address, along with allowing the domain&apos;s A record to send email. The final <code>-all</code> tells receivers to reject any email that does not match these patterns.</p><p>The SPF syntax can be found at: <a href="http://www.open-spf.org/SPF_Record_Syntax/?ref=blog.jenningsga.com">http://www.open-spf.org/SPF_Record_Syntax/</a></p><h2 id="dmarc">DMARC</h2><p>Finally, there is the Domain-based Message Authentication, Reporting and Conformance (<a href="https://en.wikipedia.org/wiki/DMARC?ref=blog.jenningsga.com">DMARC</a>) protocol. Building on top of the last two security measures, DMARC specifies that the domains used in the SPF and DKIM validations must align with the FROM header on an email message.</p><p>In addition, DMARC allows you to define what should happen when SPF or DKIM validation breaks. For example, you can ask receiving servers to send a report back to you on validation failures.</p><p>For instructions on setting up DMARC, see:</p><ul><li>G Suite: <a href="https://support.google.com/a/answer/2466563?ref=blog.jenningsga.com">Enhance security for forged spam (DMARC)</a></li></ul><h1 id="validation-tools">Validation Tools</h1><p>To make sure that your domain configurations are functioning correctly, I recommend using the following tools:</p><ul><li>SparkPost/Port25 - <a href="https://www.sparkpost.com/email-tools/authentication-checker/?ref=blog.jenningsga.com">Authentication Checker</a></li><li>dmarcian - <a href="https://dmarcian.com/dmarc-tools/?ref=blog.jenningsga.com">DMARC Testing &amp; Reporting Tools</a></li><li>MXToolBox - <a href="https://mxtoolbox.com/spf.aspx?ref=blog.jenningsga.com">SPF Record Check &amp; SPF Lookup</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Proxmox - Keeping Quorum with QDevices]]></title><description><![CDATA[We will develop a way of maintaining high availability with a two-node or other even-node-count Proxmox cluster deployment. To do this, we will be using what is known as a QDevice.]]></description><link>https://blog.jenningsga.com/proxmox-keeping-quorum-with-qdevices/</link><guid isPermaLink="false">632052e22a16870001d0a0d7</guid><category><![CDATA[Proxmox]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Corosync]]></category><category><![CDATA[Cluster]]></category><category><![CDATA[Quorum]]></category><category><![CDATA[Raspberry Pi]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Sat, 04 Jan 2020 19:32:14 GMT</pubDate><media:content url="https://blog.jenningsga.com/content/images/2020/01/keeping_quorum_2.jpg" medium="image"/><content:encoded><![CDATA[<h1 id="introduction">Introduction</h1><img src="https://blog.jenningsga.com/content/images/2020/01/keeping_quorum_2.jpg" alt="Proxmox - Keeping Quorum with QDevices"><p>In this post, we will develop a way of maintaining high availability with a two-node or other even-node-count Proxmox cluster deployment. To do this, we will be using what is known in Corosync nomenclature as a QDevice in order to break ties and keep quorum in the cluster. This will allow your hypervisors to operate under failure conditions which would otherwise cause outages. The use of a QDevice is only recommended for non-production deployments where another full Proxmox node is not feasible. For our purposes, we will be using a Raspberry Pi for the QDevice.
Let&apos;s begin!</p><h1 id="understanding-the-benefits">Understanding the Benefits</h1><p>Proxmox uses the <a href="https://en.wikipedia.org/wiki/Corosync_Cluster_Engine?ref=blog.jenningsga.com">Corosync</a> cluster engine behind the scenes. The <a href="https://pve.proxmox.com/wiki/Cluster_Manager?ref=blog.jenningsga.com">Proxmox background services</a> rely on Corosync in order to communicate configuration changes between the nodes in the cluster.</p><p>In order to keep synchronization between the nodes, Proxmox requires that at least three nodes be added to the cluster. This may not be feasible in testing and homelab setups. In a two-node setup, both nodes must always be operational for any change, such as starting, stopping, or creating VMs and containers, to be made.</p><p>In order to fix this, we can use an external QDevice whose sole purpose is to settle votes during times of node outage. This QDevice will not be a visible component of the Proxmox cluster and cannot run any virtual machines or containers itself.</p><p>The benefits of adding a QDevice to a cluster include:</p><ul><li>Allowing modifications to the running hypervisor during single-node downtime in a two-node deployment.</li><li>Settling disputes in even-node-count deployments.</li></ul><h1 id="creating-the-qdevice">Creating the QDevice</h1><p>For my purposes, I am using a Raspberry Pi 2B and the Arch ARM distribution for a slim Corosync QDevice node. My two-node Proxmox cluster is running the latest Proxmox 6, which utilizes Corosync 3.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2020/01/RBP-Stack.jpg" class="kg-image" alt="Proxmox - Keeping Quorum with QDevices" loading="lazy"></figure><h2 id="installing-dependencies">Installing Dependencies</h2><p>On the RBP host, we will want to install the corosync-qdevice package from the AUR:</p><ul><li><a href="https://aur.archlinux.org/packages/corosync-qdevice/?ref=blog.jenningsga.com">https://aur.archlinux.org/packages/corosync-qdevice/</a></li></ul><p>It should be noted that, at the time of writing, the following dependencies of this package do not include the <code>armv7h</code> arch in the PKGBUILD.</p><ul><li><a href="https://aur.archlinux.org/packages/libcgroup/?ref=blog.jenningsga.com">https://aur.archlinux.org/packages/libcgroup/</a></li><li><a href="https://aur.archlinux.org/packages/kronosnet/?ref=blog.jenningsga.com">https://aur.archlinux.org/packages/kronosnet/</a></li><li><a href="https://aur.archlinux.org/packages/corosync/?ref=blog.jenningsga.com">https://aur.archlinux.org/packages/corosync/</a></li></ul><p>You can safely add <code>armv7h</code> to the PKGBUILD manually and expect a successful build and install from these packages.</p><p>Once the package dependencies are met, we can start and enable the <code>corosync-qnetd</code> systemd service:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">sudo systemctl start corosync-qnetd.service
sudo systemctl enable corosync-qnetd.service
</code></pre>
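<!--kg-card-end: markdown--><p>Before moving on, it is worth confirming that the daemon is healthy. A minimal sanity check, assuming qnetd is using its default TCP port of 5403:</p><!--kg-card-begin: markdown--><pre><code class="language-bash"># Confirm the service is active
sudo systemctl status corosync-qnetd.service
# Confirm qnetd is listening for cluster connections (default port 5403)
ss -tlnp | grep 5403
</code></pre>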
<!--kg-card-end: markdown--><h1 id="adding-the-qdevice-to-the-cluster">Adding the QDevice to the Cluster</h1><p>On each of the Proxmox nodes, you will need to do the following.</p><!--kg-card-begin: markdown--><ol>
<li>Make sure the QDevice can be reached via SSH (see the quick check after this list).</li>
<li>Install Corosync QDevice and QNETd dependencies.
<ul>
<li><code>apt install corosync-qnetd corosync-qdevice</code></li>
</ul>
</li>
</ol>
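<!--kg-card-end: markdown--><p>The setup command below connects to the QDevice over SSH, so it is worth confirming SSH access as root from each node first. A quick check, using the same placeholder address as below:</p><!--kg-card-begin: markdown--><pre><code class="language-bash"># Should print the QDevice&apos;s hostname without failing
ssh root@&lt;rbp-ip&gt; hostname
</code></pre>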
<!--kg-card-end: markdown--><p>Once this is completed, we can run the Proxmox QDevice setup as below:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">pvecm qdevice setup &lt;rbp-ip&gt;
</code></pre>
<!--kg-card-end: markdown--><h1 id="verification-of-success">Verification of Success</h1><p>To verify, you can use the <code>pvecm status</code> command and see that the Qdevice has been added and that it contains a single vote as member of the cluster.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">root@pve:~# pvecm status
Cluster information
-------------------
Name:             JenCluster
Config Version:   5
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Fri Jan  3 22:13:24 2020
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.2c
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW 192.168.1.131 (local)
0x00000002          1         NR 192.168.1.116
0x00000000          1            Qdevice
</code></pre>
<!--kg-card-end: markdown--><p>One important point to note is that, as described in the <a href="https://pve.proxmox.com/wiki/Cluster_Manager?ref=blog.jenningsga.com#_corosync_external_vote_support">Corosync External Vote Support</a> documentation, you should remove the QDevice before adding another Proxmox node, in order to keep an odd node count in the cluster.</p>]]></content:encoded></item><item><title><![CDATA[A Technical Overview of Smartmontools]]></title><description><![CDATA[Smartmontools is a very well-known toolset for monitoring and querying storage health. We will take a look at the code of Smartmontools and see what we can learn from it. We will also take a look at the Linux storage system hierarchy and how communication takes place from a software perspective.]]></description><link>https://blog.jenningsga.com/technical-overview-of-smartmontools/</link><guid isPermaLink="false">632052e22a16870001d0a0d6</guid><category><![CDATA[Linux]]></category><category><![CDATA[Storage]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Sun, 22 Sep 2019 16:04:38 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1484662020986-75935d2ebc66?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<h1 id="introduction">Introduction</h1><img src="https://images.unsplash.com/photo-1484662020986-75935d2ebc66?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="A Technical Overview of Smartmontools"><p><a href="https://www.smartmontools.org/?ref=blog.jenningsga.com">Smartmontools</a> is a very well-known toolset for monitoring and querying storage health. The information that the project utilizes is a part of the <em>Self-Monitoring, Analysis and Reporting Technology System</em> (or <a href="https://en.wikipedia.org/wiki/S.M.A.R.T.?ref=blog.jenningsga.com">S.M.A.R.T</a>.) which is a standard implemented by many modern hard drives. In this post, we are going to take a look at the code of Smartmontools and see what we can learn from it. We will also take a look at the Linux storage system hierarchy and how communication takes place from a software perspective. Let&apos;s begin!</p><h2 id="checking-out-the-codebase">Checking out the Codebase</h2><p>We will be checking out the code through the official SVN repository, although a Github mirror exists as well. <a href="https://www.smartmontools.org/browser?rev=4934&amp;ref=blog.jenningsga.com">Revision 4934</a> will be used throughout this review.</p><p>Checking out the codebase is as easy as:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">svn co https://svn.code.sf.net/p/smartmontools/code/trunk/smartmontools smartmontools
</code></pre>
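<!--kg-card-end: markdown--><p>Alternatively, if you prefer Git, the mirror mentioned above can be cloned instead (the URL below assumes the project&apos;s GitHub mirror):</p><!--kg-card-begin: markdown--><pre><code class="language-bash">git clone https://github.com/smartmontools/smartmontools.git
</code></pre>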
<!--kg-card-end: markdown--><p></p><h3 id="building-the-code">Building the Code</h3><p>We will be using Linux for building and reviewing the code. Smartmontools uses Automake as the build system. From a clean repository, we can run the build to produce the binaries with the following commands:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">./autogen.sh
./configure
make
sudo make install
</code></pre>
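<!--kg-card-end: markdown--><p>A quick way to confirm the install succeeded is to ask the freshly built binary for its version (paths assume the build&apos;s default install prefix):</p><!--kg-card-begin: markdown--><pre><code class="language-bash"># Verify the binary is on the PATH and check which revision was built
which smartctl
smartctl --version
</code></pre>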
<!--kg-card-end: markdown--><p></p><p>Reading the INSTALL file gives details on the default settings and the overrides which are possible during compilation.</p><h2 id="reviewing-the-interfaces-and-data-structures">Reviewing the Interfaces and Data Structures</h2><p>The code is written in C++, making use of a few base classes and mixins for code abstractions across the supported platforms.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2019/09/smartmontools_interfaces_descriptions-2.svg" class="kg-image" alt="A Technical Overview of Smartmontools" loading="lazy"><figcaption>Figure 1: Platform Inheritance Hierarchy (Not a Complete Depiction)</figcaption></figure><p>The smart device and smart interface classes are the basic building blocks of the code. In addition, generic ATA, SCSI, and NVMe type classes are included for extension. These base classes are defined in <code>dev_interface.h</code>.</p><p>The smart_device class can be used to downcast each device object into a specific implementation of the extended smart_device classes, such as:</p><ul><li><code>smart_device.to_ata()</code> returns <code>ata_device</code></li><li><code>smart_device.to_scsi()</code> returns <code>scsi_device</code></li><li><code>smart_device.to_nvme()</code> returns <code>nvme_device</code></li></ul><p>There are ten platforms that are supported by Smartmontools. Each of these platforms has its own unique structures and driver interfaces for querying the underlying storage. Smartmontools uses object inheritance in order to differentiate the implementations of the smart device classes and smart interfaces for each platform. These are defined in the <code>os_&lt;platform&gt;</code> cpp and header files.</p><p>In addition, the code uses a global registry singleton object to define the smart interface to use for the platform being supported at compile time. At the end of each of the platform-specific source files, <code>smart_interface::init</code> is implemented to set the global interface to the platform-specific implementation.</p><p>For example, in <code>os_linux.cpp</code>, the following code is used to register the Linux smart interface as the source of truth:</p><!--kg-card-begin: markdown--><pre><code class="language-cpp">void smart_interface::init()
{
  static os_linux::linux_smart_interface the_interface;
  smart_interface::set(&amp;the_interface);
}
</code></pre>
<!--kg-card-end: markdown--><p></p><p>In conclusion, we have seen how the codebase makes use of object inheritance in order to differentiate the methods of pulling S.M.A.R.T. data from the underlying storage. This provides a clean interface for extension and the addition of future devices based on the platform running the executable.</p><h1 id="getting-to-the-meat-of-the-pie">Getting to the Meat of the Pie</h1><p>As we go deeper into the code, we find some more interesting things to learn. One instance is the logic used to scan for devices and detect a device type. Another example we will examine is how Smartmontools is able to gather information about storage behind RAID controllers and USB devices.</p><h3 id="understanding-storage-device-types">Understanding Storage Device Types</h3><p>One aspect we glossed over is the supported device types. From a software perspective, there are three standards for communication with mass storage: ATA, SCSI, and the newer NVMe. Each offers different command structures for sending requests and receiving responses from the storage.</p><p>The <a href="http://www.t13.org/Standards/Default.aspx?DocumentType=3&amp;DocumentStage=2&amp;ref=blog.jenningsga.com">ATA standards</a> are commonly found on desktop computers. They first appeared as <a href="https://en.wikipedia.org/wiki/Parallel_ATA?ref=blog.jenningsga.com">Parallel ATA</a> (PATA), which used the IDE &quot;ribbon&quot; cables. The more modern <a href="https://en.wikipedia.org/wiki/Serial_ATA?ref=blog.jenningsga.com">SATA</a> devices, connectors, and host controllers continue to utilize the ATA command set. There is also the improved <a href="https://en.wikipedia.org/wiki/Advanced_Host_Controller_Interface?ref=blog.jenningsga.com">AHCI</a> standard for SATA, which includes additional instructions such as <a href="https://en.wikipedia.org/wiki/Native_Command_Queuing?ref=blog.jenningsga.com">Native Command Queuing</a> and TRIM support for SSDs.</p><p><a href="http://www.t10.org/scsi-3.htm?ref=blog.jenningsga.com">SCSI</a> is an older standard written for more than just hard disks as compared with ATA. <a href="https://en.wikipedia.org/wiki/Serial_Attached_SCSI?ref=blog.jenningsga.com">Serial Attached SCSI</a> (SAS) is a replacement of <a href="https://en.wikipedia.org/wiki/Parallel_SCSI?ref=blog.jenningsga.com">Parallel SCSI</a> (SPI) and is often found in servers and enterprise storage hardware. One thing to note is that SATA drives are compatible with SAS controllers, but SAS drives may not be used with SATA controllers.</p><p>Finally, there is the <a href="https://en.wikipedia.org/wiki/NVM_Express?ref=blog.jenningsga.com">NVM Express</a> (NVMe) standard which is used with modern SSD devices. This standard was written from the ground up with the low latency and high parallelism of SSDs in mind. NVMe is gaining more traction in desktop and enterprise environments and will continue to gain market share as SSD devices become more economical and powerful.</p><h3 id="scanning-storage-devices">Scanning Storage Devices</h3><p>With smartctl, one can scan for all devices connected to the system. As an example:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d scsi # /dev/sdb, SCSI device
</code></pre>
<!--kg-card-end: markdown--><p>We can see here that two SCSI devices are detected on the system. Let&apos;s take a look at how Smartmontools detects and reports on these devices.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2019/09/Smartmontools_Paths-2.svg" class="kg-image" alt="A Technical Overview of Smartmontools" loading="lazy"><figcaption>Figure 2: Linux Glob Patterns Used for Searching for Devices</figcaption></figure><p>Within the Linux subsystem, storage devices are exposed through specific device nodes on the udev or devtmpfs mounted filesystem at <code>/dev/</code>. The codebase makes use of the naming conventions used by Linux in order to scan for the storage devices attached to the system.</p><p>These device nodes can be deceiving and may not accurately depict whether a device is ATA, SCSI, or NVMe. For example, let&apos;s look at the device located at <code>/dev/sdb</code>:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ udevadm info -a -n /dev/sdb | grep -E &apos;looking|DRIVER&apos;
  looking at device &apos;/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb&apos;:
    DRIVER==&quot;&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0&apos;:
    DRIVERS==&quot;sd&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0&apos;:
    DRIVERS==&quot;&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:1f.2/ata2/host1&apos;:
    DRIVERS==&quot;&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:1f.2/ata2&apos;:
    DRIVERS==&quot;&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:1f.2&apos;:
    DRIVERS==&quot;ahci&quot;
  looking at parent device &apos;/devices/pci0000:00&apos;:
    DRIVERS==&quot;&quot;
</code></pre>
<!--kg-card-end: markdown--><p>As we can see, the <code>/dev/sdb</code> block device uses the <a href="https://linux.die.net/man/4/sd?ref=blog.jenningsga.com">sd kernel driver</a> which handles SCSI devices. In the parent device hierarchy for <code>/dev/sdb</code>, we can see that the ahci kernel driver is used for the <code>ata2</code> host adapter. AHCI is a standard used with SATA, and thus this is an indicator that the underlying device may be SATA.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ lsscsi
[0:0:0:0]    disk    ATA      Samsung SSD 850  2B6Q  /dev/sda
[1:0:0:0]    disk    ATA      HGST HTS725050A7 B550  /dev/sdb
</code></pre>
<!--kg-card-end: markdown--><p>As we continue to probe the system with the <code>lsscsi</code> tool, we see that <code>/dev/sdb</code> is detected as ATA. So why is Smartmontools not detecting the proper device type?</p><h3 id="auto-detecting-device-types">Auto-Detecting Device Types</h3><p>As we discussed earlier, the SCSI command set is oftentimes used for more than hard drives. USB and RAID devices are commonly configured as SCSI devices regardless of the storage hardware they are connecting to on the system.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2019/09/Linux-storage-stack-diagram_v4.10.png" class="kg-image" alt="A Technical Overview of Smartmontools" loading="lazy"><figcaption><a href="https://www.thomas-krenn.com/en/wiki/Linux_Storage_Stack_Diagram?ref=blog.jenningsga.com">Linux Storage Stack Diagram</a></figcaption></figure><p>Within the Linux kernel itself, we can see above that the <strong>SCSI mid layer</strong> is responsible for many device types and handles the translation to low-level drivers including libata (for ATA hardware).</p><p>Smartmontools uses a few techniques to detect the underlying storage type behind these generic block storage devices. One is by issuing a SCSI INQUIRY command to the host device and parsing the result. According to the T10 Specification, an ATA device should respond with a vendor identification of &apos;ATA &#xA0; &#xA0; &apos; (&quot;ATA&quot; padded to an 8-byte field).</p><p>Using our previous <code>/dev/sdb</code> and the <code>sg_inq</code> tool, we can see that this is indeed the case:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ sg_inq /dev/sdb | grep &apos;Vendor identification&apos;
 Vendor identification: ATA
</code></pre>
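<!--kg-card-end: markdown--><p>Incidentally, the device type smartctl uses can also be forced by hand with its <code>-d</code> option; for example, <code>-d sat</code> tells it to treat the device as an ATA device behind a SCSI-to-ATA translation layer (device node taken from our example above):</p><!--kg-card-begin: markdown--><pre><code class="language-bash"># Force SCSI/ATA Translation instead of relying on auto-detection
smartctl -d sat -a /dev/sdb
</code></pre>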
<!--kg-card-end: markdown--><p></p><p>Smartmontools will also detect this and use <a href="https://en.wikipedia.org/wiki/SCSI_/_ATA_Translation?ref=blog.jenningsga.com">SCSI/ATA Translation</a> (SAT) to communicate through the SCSI application layer directly to the ATA device. Under the hood, this is done using the <code>autodetect_open</code> methods. Unfortunately, the <code>smartctl --scan</code> command does not go down this code path and so will not always report the proper device type. Running <code>smartctl -a /dev/sdb</code> does auto-detect the device type and will give you a more accurate understanding of the actual hardware configuration.</p><h3 id="understanding-passthrough-devices">Understanding Passthrough Devices</h3><p>There are other instances of a storage device not being directly accessible to the system. For instance, a hard drive may be attached through a USB bridge, such as with an external hard drive or USB drive. We have seen previously that these USB devices can be expressed as generic SCSI devices through the kernel drivers. In these cases, all SCSI commands for querying S.M.A.R.T. data must be passed through the controller on the USB bridge to the hard drive. In order to query the S.M.A.R.T. data of the device, the request must go through SAT or <a href="https://www.nvmexpress.org/wp-content/uploads/NVM-Express-SCSI-Translation-Reference-1_1-Gold.pdf?ref=blog.jenningsga.com">SCSI / NVME Translation</a>.</p><p>Another instance requiring custom passthrough commands is <a href="https://www.smartmontools.org/wiki/Supported_RAID-Controllers?ref=blog.jenningsga.com">RAID controllers</a>. For example, one class of RAID controllers which are supported by Smartmontools is the MegaRAID. Again, MegaRAID devices will be exposed as block devices on the Linux system through the sd kernel module, similar to the following:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">$ udevadm info -a -n /dev/sda | grep -E &apos;looking|DRIVER&apos;
  looking at device &apos;/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host0/target0:2:0/0:2:0:0/block/sda&apos;:
    DRIVER==&quot;&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host0/target0:2:0/0:2:0:0&apos;:
    DRIVERS==&quot;sd&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host0/target0:2:0&apos;:
    DRIVERS==&quot;&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host0&apos;:
    DRIVERS==&quot;&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:07.0/0000:06:00.0&apos;:
    DRIVERS==&quot;megaraid_sas&quot;
  looking at parent device &apos;/devices/pci0000:00/0000:00:07.0&apos;:
    DRIVERS==&quot;pcieport&quot;
  looking at parent device &apos;/devices/pci0000:00&apos;:
    DRIVERS==&quot;&quot;
</code></pre>
<!--kg-card-end: markdown--><p>In order to gain information about the underlying storage behind a RAID device, Smartmontools must issue passthrough commands to the adapter. This is done using special <a href="https://en.wikipedia.org/wiki/Ioctl?ref=blog.jenningsga.com">IOCTL</a> system calls which will be interpreted by the kernel driver and passed along to the firmware. The IOCTL command consists of a specially constructed packet which is sent to the kernel driver for processing. This packet will contain the necessary passthrough command fields such as:</p><ul><li>SCSI Command</li><li>SCSI Command Data</li><li>Target device ID</li></ul><p>Since IOCTLs are not standard across all device drivers or RAID controllers, Smartmontools must create custom device classes to support all of the different ways of passing commands through to the storage hardware.</p><h2 id="conclusion">Conclusion</h2><p>We&apos;ve taken a look at how Smartmontools operates and how it is able to query the different device types. We have also taken a deep dive into the different device types and how they differ from one another. In addition, the Linux storage subsystem was explored to get an idea of how devices are exposed through the Operating System. Finally, we took a look at how Smartmontools issues translation and passthrough commands to communicate with storage behind many different bridges and controllers.</p>]]></content:encoded></item><item><title><![CDATA[Python - A House Divided]]></title><description><![CDATA[An in-depth history of Python 3. Changes in syntax, incompatibilities, and tips and tricks to convert and support the latest Python from prior versions.]]></description><link>https://blog.jenningsga.com/python-a-house-divided/</link><guid isPermaLink="false">632052e22a16870001d0a0d4</guid><category><![CDATA[Python]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Mon, 29 Jul 2019 02:15:47 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1509255929945-586a420363cf?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<h1 id="introduction">Introduction</h1><img src="https://images.unsplash.com/photo-1509255929945-586a420363cf?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Python - A House Divided"><p>This is a story about how a holy house was cut in twain, about what happens when an idea, that beautiful is better than ugly, goes too far. Today we are talking about the transition from Python 2 to Python 3 and the war that ensued.</p><h1 id="python-history-201">Python History 201</h1><p>In April of 2006, a decision was made to push a major release to the Python programming language. At the time, Python 2.5 was the latest stable release available. It was decided that after the next release of 2.6, the language would go through a major transition where core functionality would change and not be backwards compatible with prior versions.
This new major release would be called Python 3.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.jenningsga.com/content/images/2019/03/python3_timeline-2.png" class="kg-image" alt="Python - A House Divided" loading="lazy"><figcaption>The Proposed Python 2.6 and 3.0 Release Schedule</figcaption></figure><p><a href="https://en.wikipedia.org/wiki/Guido_van_Rossum?ref=blog.jenningsga.com">Guido van Rossum</a>, the author and authority of Python, laid out a <a href="https://www.python.org/dev/peps/pep-0361/?ref=blog.jenningsga.com">timeline</a> in which the Python 2 and prior codebase would be maintained. It was expected that by 2013 the community would embrace the new transition and the prior versions would no longer be supported. As we know now, this timeline would stretch much further, proving just how accurate we programmers are at estimating time.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2019/03/pep3000-1.png" class="kg-image" alt="Python - A House Divided" loading="lazy"></figure><p>The changes for Python 3 were so great that the Python Enhancement Proposals, also known as <a href="https://www.python.org/dev/peps/?ref=blog.jenningsga.com">PEPs</a> for short, would be incremented to <a href="https://www.python.org/dev/peps/pep-3000/?ref=blog.jenningsga.com">PEP-3000</a> to indicate the new version of the language. In the next sections, we will look at the modifications to the language, how they could break code written for Python 2, and how to migrate existing codebases to Python 3.</p><h1 id="changes-in-python-3">Changes in Python 3</h1><p></p><!--kg-card-begin: markdown--><blockquote>
<p>There should be one-- and preferably only one --obvious way to do it</p>
</blockquote>
<!--kg-card-end: markdown--><p>One of the main scriptures followed by Pythonistas is <a href="https://www.python.org/dev/peps/pep-0020/?ref=blog.jenningsga.com">The Zen of Python</a>. In it, a particular aphorism states that there should be only one clear and concise way of doing something within the language. As Python grew, there came to be many ways of using packages and syntaxes which did not follow the standards set by other similar functionalities in the language. Let&apos;s take a look at how Python 3 rectified these inconsistencies, but at the cost of incompatibility with older syntaxes in the language.</p><h2 id="print-is-now-a-function">Print is Now a Function</h2><p></p><!--kg-card-begin: markdown--><blockquote>
<p>print is the only application-level functionality that has a statement dedicated to it.</p>
</blockquote>
<!--kg-card-end: markdown--><p>Prior to the change outlined in <a href="https://www.python.org/dev/peps/pep-3105/?ref=blog.jenningsga.com">PEP-3105</a>, <em>print</em> was a language statement. <strong>With Python 3, print would now be considered a builtin function</strong>. According to the PEP, the print statement was an exception to the rule and Guido regretted this particular construct in the language.</p><p>Changing <em>print</em> to a function led to a few differences which broke backwards compatibility.</p><!--kg-card-begin: markdown--><pre><code class="language-python">&gt;&gt;&gt; print(&apos;Python&apos;, &apos;2&apos;)
(&apos;Python&apos;, &apos;2&apos;)
</code></pre>
<!--kg-card-end: markdown--><p></p><p>For Python 2, this call is equivalent to writing <code>print (&apos;Python&apos;, &apos;2&apos;)</code>, or in other words: printing a <em>tuple</em> containing the two strings &apos;Python&apos; and &apos;2&apos; with the <code>print</code> statement.</p><!--kg-card-begin: markdown--><pre><code class="language-python">&gt;&gt;&gt; print(&apos;Python&apos;, &apos;3&apos;)
Python 3
</code></pre>
<!--kg-card-end: markdown--><p></p><p>For Python 3, since <em>print</em> was converted to a function, the call above results in passing two strings as arguments to the <em>print</em> function.</p><p>The differences are subtle, but this did result in breaking functionality for some programs.</p><h2 id="dictionary-keys-and-values">Dictionary Keys and Values</h2><p>The interface used for looping over dictionary items was also changed for Python 3. Previously, there were two redundant sets of methods for iterating over a dictionary and its elements:</p><ul><li><em>dict.keys() </em>and <em>dict.iterkeys()</em></li><li><em>dict.values() </em>and <em>dict.itervalues()</em></li><li><em>dict.items() </em>and <em>dict.iteritems()</em></li></ul><p><em>keys, values, </em>and<em> items</em> return a list type, while the <em>iterkeys, itervalues</em>, and <em>iteritems </em>methods return an iterator type. Otherwise, the two sets of methods serve the same purpose and could be used to get the same data. For this reason, <strong>it was decided to remove the <em>iterkeys</em>, <em>itervalues</em>, and <em>iteritems</em> methods and only support <em>keys</em>, <em>values</em>, and <em>items</em>.</strong></p><p>In addition, the data types returned from <em>keys, values, </em>and<em> items</em> were changed to lightweight <em>set</em>-like view types. This allows for direct comparison of results and also removes the unnecessary copying that was done internally in Python 2.</p><p>This leads to some differences, as can be seen below.</p><!--kg-card-begin: markdown--><pre><code class="language-python">&gt;&gt;&gt; {&apos;python&apos;: 2}.items()[0]
(&apos;python&apos;, 2)
</code></pre>
<!--kg-card-end: markdown--><p></p><p>In this Python 2 example, we can see that we can index into the return of the items call due to it being a list type.</p><!--kg-card-begin: markdown--><pre><code class="language-python">&gt;&gt;&gt; {&apos;python&apos;: 3}.items()[0]
Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
TypeError: &apos;dict_items&apos; object does not support indexing
</code></pre>
<!--kg-card-end: markdown--><p></p><p>But, in the Python 3 example above, we cannot index into the result due to the change in the data type returned.</p><p><a href="https://www.python.org/dev/peps/pep-3106/?ref=blog.jenningsga.com">PEP-3106</a> covers these changes in detail.</p><h2 id="reorganization-of-the-standard-library">Reorganization of the Standard Library</h2><p>The next breaking change encompassed within Python 3 was the effort to normalize the modules of the standard library. This was evaluated in <a href="https://www.python.org/dev/peps/pep-3108/?ref=blog.jenningsga.com">PEP-3108</a>.</p><!--kg-card-begin: markdown--><blockquote>
<p>Just like the language itself, Python&apos;s standard library (stdlib) has grown over the years to be very rich. But over time some modules have lost their need to be included with Python. There has also been an introduction of a naming convention for modules since Python&apos;s inception that not all modules follow.</p>
</blockquote>
<!--kg-card-end: markdown--><p>Python has been around for a long time. Many do not realize that Python is older than many other popular programming languages such as Java, JavaScript, and Ruby. In that time, standards such as naming conventions changed, and the migration to <strong>Python 3 was used to normalize some of the inconsistencies in naming within the standard library.</strong></p><p>One of the changes that affected many Python programs was the renaming of the <a href="https://www.python.org/dev/peps/pep-3108/?ref=blog.jenningsga.com#urllib-package">urllib package</a> and its contents. The urllib package holds functionality to create HTTP requests, call HTTP endpoints, and parse HTTP responses.</p><p>Other modules renamed included:</p><ul><li><em>html</em></li><li><em>http</em></li><li><em>tkinter</em></li><li><em>xmlrpc</em></li></ul><h2 id="raising-and-catching-exceptions">Raising and Catching Exceptions</h2><p><a href="https://www.python.org/dev/peps/pep-3109/?ref=blog.jenningsga.com">PEP-3109</a> and <a href="https://www.python.org/dev/peps/pep-3110/?ref=blog.jenningsga.com">PEP-3110</a> contain the details for the changes to syntax around raising and catching exceptions in Python 3.</p><p>In Python 2, there were several ways to raise exceptions:</p><!--kg-card-begin: markdown--><pre><code class="language-python">raise Exception, &apos;blah&apos; # Python 2
raise Exception(&apos;blah&apos;) # Python 2 and 3
</code></pre>
<!--kg-card-end: markdown--><p></p><p>This was another case where there was duplicated functionality which accomplished the same result.<strong> It was proposed to remove the first syntax and keep the second when raising exceptions in Python 3.</strong></p><p>Similarly, catching exceptions had multiple syntaxes which were equivalent.</p><!--kg-card-begin: markdown--><pre><code class="language-python">try:
    ...
except Exception, e: # Python 2
    ...
   
try:
    ...
except Exception as e: # Python 2 and 3
    ...
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Again, in Python 3 the first syntax is no longer valid and the latter was kept.</p><h2 id="bytes-versus-strings">Bytes versus Strings</h2><p>Prior to Python 3, byte and string data types were used interchangeably. This was due to the fact that the default type for string literals in the language was the bytestring type.</p><!--kg-card-begin: markdown--><pre><code class="language-python">&apos;blah&apos; == b&apos;blah&apos; # Only returns True before Python 3
</code></pre>
<!--kg-card-end: markdown--><p></p><p>The problem with this functionality is that byte objects do not contain encoding information. So when converting a byte array to a unicode string, the runtime cannot determine the appropriate character set to use automatically. Thus, the language should not freely allow conversion between the types without giving the programmer the option to set the encoding scheme.</p><p>In Python 3, it was decided to separate bytes and strings. String literals are now considered proper unicode strings with UTF-8 encoding by default. To convert between bytes and strings, you may use <em>encode </em>and <em>decode</em>, where the encoding can be changed.</p><p>This was a step in the right direction for providing proper unicode support for the language, but it obviously came at the expense of breaking a subset of existing code. The enhancement was proposed as a part of <a href="https://www.python.org/dev/peps/pep-0358/?ref=blog.jenningsga.com">PEP-358</a>.</p><h1 id="converting-from-python-2-to-python-3">Converting from Python 2 to Python 3</h1><p>As we have seen, the change set for Python 3 contained a lot of syntactical and functional modifications to the language. Many of these changes could break processes written with only Python 2 syntax and packages in mind. So how does one migrate to Python 3? And how does a maintainer support both versions?</p><p>The following general framework can be followed to accomplish a migration to Python 3:</p><ul><li>Migrate to the latest Python 2.7 release.</li><li>Use unit testing with sufficient code coverage to test all points in the code.</li><li>Enable logging and watch for warnings around deprecated functions.</li><li>Use the <code>__future__</code> module in order to support both Python 2 and Python 3.</li></ul><p>An important module to consider during version migration and version compatibility support is the <code>__future__</code> module. Outlined in <a href="https://www.python.org/dev/peps/pep-0236/?ref=blog.jenningsga.com">PEP-236</a>, importing one of the future statements allows a programmer to introduce the new syntax of newer versions of the core language into older versions of the language. The benefit of using this construct is that code can be updated to the new syntax one file at a time, as well as maintain support for both versions of the language.</p><!--kg-card-begin: markdown--><pre><code class="language-python"># From python-future.org - Quick Start Guide:
from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
from builtins import *
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Obviously, using future statements will not change your code to the new syntax. If you are looking for a more automated method of converting to Python 3, look into the <a href="https://docs.python.org/2/library/2to3.html?ref=blog.jenningsga.com">2to3</a> translation script. With this, Python source files are passed in and a series of fixers are applied to convert syntax in place.</p><p>Another important feature to enable during unit testing is the <code>-3</code> flag. Setting the <a href="https://docs.python.org/2/using/cmdline.html?ref=blog.jenningsga.com#cmdoption-3">python -3</a> flag makes specific deprecation warnings visible during execution of the code.</p><h1 id="dropping-support-for-python-2">Dropping Support for Python 2</h1><p>We have seen how the many syntax changes in Python 3 have caused compatibility issues with the older versions of the language. And we have seen some of the ways in which we can support Python 2 and 3 as package maintainers. Unfortunately, at the scale at which Python is growing, the Python collective is unable to continue to support both versions forever, and at some point, Python 2 support must be dropped.</p><p>The end of life of Python 2.7 was officially <a href="https://www.python.org/dev/peps/pep-0373/?ref=blog.jenningsga.com">extended until 2020</a>. After that, we will see many popular Python communities dropping support. Many projects are already tracking this timeline and planning the EOL of Python 2:</p><ul><li><a href="https://docs.djangoproject.com/en/2.1/releases/2.0/?ref=blog.jenningsga.com">Django 2.0 no longer supports Python 2</a></li><li><a href="https://github.com/tox-dev/tox/issues/1130?ref=blog.jenningsga.com">tox is tracking an issue for dropping Python 2 support</a></li><li><a href="https://github.com/pypa/pip/issues/6148?ref=blog.jenningsga.com">Using pip in a Python 2 environment emits a deprecation warning</a></li></ul><h1 id="improvements-in-python-3">Improvements in Python 3</h1><p>We have gone over many of the problems with the migration from Python 2 to 3. But is it worth the effort to do so? Take a look at some of the new functionalities available in Python 3.
Some will certainly change your mind.</p><ul><li>The new <a href="https://docs.python.org/3/reference/lexical_analysis.html?ref=blog.jenningsga.com#f-strings">fstring</a> syntax.</li><li>The new <a href="https://docs.python.org/3.5/library/multiprocessing.html?ref=blog.jenningsga.com">multiprocessing</a> packages.</li><li>Addition of asynchronous coroutines in the <a href="https://docs.python.org/3/library/asyncio.html?ref=blog.jenningsga.com">asyncio</a> package.</li><li>Support for <a href="https://docs.python.org/3/library/typing.html?ref=blog.jenningsga.com">type hinting</a>, which gave rise to static code analyzers such as <a href="http://mypy-lang.org/?ref=blog.jenningsga.com">mypy</a>.</li><li><a href="https://www.python.org/dev/peps/pep-0468/?ref=blog.jenningsga.com">Dictionaries preserving order</a>.</li><li>The <a href="https://www.python.org/dev/peps/pep-3135/?ref=blog.jenningsga.com">new super style</a> for class inheritance.</li><li>The <a href="https://www.python.org/dev/peps/pep-3119/?ref=blog.jenningsga.com">Abstract Base Class</a> package.</li></ul>]]></content:encoded></item><item><title><![CDATA[Steam's Proton Brings Gaming to Linux]]></title><description><![CDATA[Proton is a compatibility tool which allows running Windows-only games from within a Linux desktop.]]></description><link>https://blog.jenningsga.com/steam-proton-brings-gaming-to-linux/</link><guid isPermaLink="false">632052e22a16870001d0a0d5</guid><category><![CDATA[Linux]]></category><category><![CDATA[Gaming]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Sat, 18 May 2019 23:21:20 GMT</pubDate><media:content url="https://blog.jenningsga.com/content/images/2019/05/valve.jpg" medium="image"/><content:encoded><![CDATA[<h1 id="proton">Proton</h1><img src="https://blog.jenningsga.com/content/images/2019/05/valve.jpg" alt="Steam&apos;s Proton Brings Gaming to Linux"><p><strong><a href="https://github.com/ValveSoftware/Proton/?ref=blog.jenningsga.com">Proton</a></strong> is a compatibility tool which allows running Windows-only games from within a Linux desktop. The reason for releasing and supporting this type of tool is unclear. Linux use is <a href="https://store.steampowered.com/hwsurvey?ref=blog.jenningsga.com">less than 1%</a> according to the Steam Hardware &amp; Software Survey. That being said, it is always good to increase your marketability.</p><p>Proton was very well received within the Linux community. Under the hood, Proton integrates <a href="https://www.winehq.org/?ref=blog.jenningsga.com">Wine</a>, <a href="https://github.com/doitsujin/dxvk?ref=blog.jenningsga.com">DXVK</a>, and <a href="https://github.com/FNA-XNA/FNA?ref=blog.jenningsga.com">FNA</a> in order to emulate the Windows APIs commonly used by many game engines. This compatibility is still very much in its infancy, and users are encouraged to report any bugs through the <a href="https://github.com/ValveSoftware/Proton/issues?ref=blog.jenningsga.com">Proton Issues</a> on their GitHub.</p><p>This is a beta-only feature available by enabling the <a href="https://steamcommunity.com/sharedfiles/filedetails/?id=182912431&amp;ref=blog.jenningsga.com">beta client</a>. To do so, simply go to your Account Settings in Steam and enable Beta participation.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2019/05/BetaClient.png" class="kg-image" alt="Steam&apos;s Proton Brings Gaming to Linux" loading="lazy"></figure><p>The usage within the Steam client works pretty well.
Games that are Windows-only will now be visible within the Steam Library. Within a game&apos;s settings, specific Proton versions can be selected.</p><figure class="kg-card kg-image-card"><img src="https://blog.jenningsga.com/content/images/2019/05/proton_settings.png" class="kg-image" alt="Steam&apos;s Proton Brings Gaming to Linux" loading="lazy"></figure><h1 id="compatibility">Compatibility</h1><p>Compatibility for games varies. The <a href="https://github.com/ValveSoftware/Proton/issues?ref=blog.jenningsga.com">Steam Issues</a> tracker is a good place to check whether a particular game has any issues.</p><h2 id="skyrim">Skyrim</h2><p>I was able to run Skyrim SE reasonably well out of the box. I did run into some issues with sound which required manual intervention. With the default <a href="https://github.com/FNA-XNA/FAudio/wiki/FAudio-for-Proton?ref=blog.jenningsga.com">FAudio</a> libraries shipped with Proton 4.2 and prior, the NPC voices do not work. This is described in the <a href="https://github.com/ValveSoftware/Proton/issues/4?ref=blog.jenningsga.com">The Elder Scrolls V: Skyrim Special Edition (489830)</a> issue. The workaround is to compile a more recent version of FAudio and override what currently resides in the Proton folder.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">git clone git://github.com/FNA-XNA/FAudio.git
cd FAudio; mkdir build; cd build
cmake .. -DXNASONG=OFF -DFFMPEG=ON
make -j4
cp libFAudio.so.0 ~/.local/share/Steam/SteamApps/common/Proton\ 4.2/dist/lib64/libFAudio.so.0
</code></pre>
<!--kg-card-end: markdown--><p>This needs to be done every time Proton is updated, since the custom library will be overwritten on update. With an updated FAudio library in place, I was able to hear the NPC voices and found no other problems running Skyrim.</p><h1 id="conclusion">Conclusion</h1><p>This is an exciting time to be in the Linux ecosystem. More and more companies are coming together and supporting our beloved kernel. Since it is in its infancy, Proton will have some issues. But the Linux community is great at coming together and documenting and supporting open source initiatives like this.</p>]]></content:encoded></item><item><title><![CDATA[Everything to Know About Web Security]]></title><description><![CDATA[It is becoming important to understand and implement enhanced security measures for serving web content. In this post, we will look at some of the common best practices and tools to keep your website safe from the newest vulnerabilities and attack surfaces.]]></description><link>https://blog.jenningsga.com/everything-to-know-about-web-security/</link><guid isPermaLink="false">632052e22a16870001d0a0ca</guid><category><![CDATA[HTTP]]></category><category><![CDATA[Nginx]]></category><category><![CDATA[Web]]></category><category><![CDATA[Browsers]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Sun, 24 Feb 2019 02:52:11 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1462045504115-6c1d931f07d1?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<h1 id="introduction">Introduction</h1><img src="https://images.unsplash.com/photo-1462045504115-6c1d931f07d1?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Everything to Know About Web Security"><p>	With the growing use of cloud technologies by companies and individuals, it is becoming ever more important to understand and implement enhanced security measures for serving web content. This is necessary to not only protect yourself and your company&apos;s intellectual property, but your users and their data as well. In this post, we will look at some of the common best practices and tools to keep your website safe from the newest vulnerabilities and attack surfaces. In doing so, you will enjoy peace of mind that your content is optimized to deliver the best experience possible.</p><h1 id="https-use-it-enforce-it">HTTPS - Use it, Enforce it</h1><p>	HTTPS, or HTTP over TLS, enables an encrypted connection between a client and the remote server. The security benefit is that any data transferred over the bidirectional connection will not be vulnerable to snooping or man-in-the-middle attacks. Enabling website encryption is required for proper <a href="https://www.sslshopper.com/article-ssl-certificates-and-pci-compliance.html?ref=blog.jenningsga.com">PCI compliance</a> of a company. So depending on the data you are collecting from users, you may be required to enable encryption to comply with regulations in your place of residence.</p><h3 id="it-s-never-been-easier">It&apos;s Never Been Easier</h3><p>	Setting up TLS (or SSL, as its predecessor was known) has never been easier, as many automated solutions exist for managing and auto-renewing certificates.
For example, <a href="https://letsencrypt.org/?ref=blog.jenningsga.com">Let&apos;s Encrypt</a>, <a href="https://aws.amazon.com/certificate-manager/?ref=blog.jenningsga.com">AWS Certificate Manager</a>, and offerings from <a href="https://www.cloudflare.com/ssl/?ref=blog.jenningsga.com">Cloudflare</a> exist to help manage the creation of certificates for you automatically.</p><h3 id="seo">SEO</h3><p>	There are secondary benefits to using HTTPS as well. Google has <a href="https://security.googleblog.com/2014/08/https-as-ranking-signal_6.html?ref=blog.jenningsga.com">announced</a> that secure websites will receive higher page rank than their insecure counterparts. Without encryption, your site will receive less traffic, will be trusted less, and will be referred to less compared with similar sites that enforce end-to-end encryption. If you are a marketing agency, your content will be diminished by not using encryption, as web security is a prominent factor in determining SEO.</p><h3 id="performance-benefits">Performance Benefits</h3><p>	In addition, with the advent of HTTP2, which all <a href="https://http2.github.io/faq/?ref=blog.jenningsga.com#does-http2-require-encryption">browsers require SSL</a> to be enabled to use, there are now performance benefits for enabling encryption. HTTP2 offers multiplexing of connections to offer faster and more efficient load times compared with version one of HTTP.</p><h1 id="web-server-encryption">Web Server Encryption</h1><p>	Some encryption is stronger than others. As computational power increases, a need for better encryption techniques has presented itself. In this section, we will look at tools and methods for making sure your website is up to date with the latest encryption ciphers and algorithms.</p><h2 id="encryption-ciphers">Encryption Ciphers</h2><p>	<a href="https://en.wikipedia.org/wiki/Cipher_suite?ref=blog.jenningsga.com">Cipher suites</a> are the algorithms which are used during initial communication and play a vital role in the overall strength of the encryption. As such, it is important to always update the libraries and processes used for web hosting, such as Nginx and OpenSSL, in order to take advantage of newer algorithms.</p><p>	A web server can decide, in its configuration, which ciphers to allow for communication. That being said, a balance must be struck in allowing less strong ciphers, as not all browsers and platforms support the latest and greatest techniques. <a href="https://wiki.mozilla.org/Security/Server_Side_TLS?ref=blog.jenningsga.com">Security/Server Side TLS</a> by Mozilla is a great article on cipher suites and which to choose based on compatibility with different browsers and devices.</p><p>A modern profile might consider the following:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">Ciphersuites: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
Versions: TLSv1.2
TLS curves: prime256v1, secp384r1, secp521r1
Certificate type: ECDSA
Certificate curve: prime256v1, secp384r1, secp521r1
Certificate signature: sha256WithRSAEncryption, ecdsa-with-SHA256, ecdsa-with-SHA384, ecdsa-with-SHA512
RSA key size: 2048 (if not ecdsa)
DH Parameter size: None (disabled entirely)
ECDH Parameter size: 256
HSTS: max-age=15768000
Certificate switching: None
</code></pre>
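<!--kg-card-end: markdown--><p>	To see what your server actually negotiates against a profile like this, OpenSSL&apos;s built-in client is handy. A minimal check, using a placeholder host (the <code>-brief</code> flag requires OpenSSL 1.1.0 or newer):</p><!--kg-card-begin: markdown--><pre><code class="language-bash"># Report the negotiated protocol version and cipher, then exit
openssl s_client -connect example.com:443 -brief &lt;/dev/null
</code></pre>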
<!--kg-card-end: markdown--><h1 id="securing-content">Securing Content</h1><p>	End-to-end encryption using TLS allows a secure channel between the user and server, but how do we ensure that the content and third party included on our website are what we expect? In this section we will look at more advanced techniques to ensure content injection and hijacking does not take place on our web services.</p><h2 id="http-headers">HTTP Headers</h2><p>	There exists a subset of HTTP headers a web administrator can use to help prevent unauthorized usage of web content. These headers are used by the browser to hint at what is to be expected on a website and any deviation of the rules should not be loaded. In this section, we will look at these headers and how they can be used to help secure your website.</p><h3 id="content-security-policy">Content Security Policy</h3><p>	The <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP?ref=blog.jenningsga.com">Content-Security-Policy</a> header can be used to prevent Cross-Site Scripting attacks. These attacks inject external scripts or pages onto your site in order to gather sensitive information from users. The CSP header allows defining where content, images, and scripts should be allowed to load from on a given webpage. In addition, a uri can be defined so that any violations will be reported to the operator.</p><!--kg-card-begin: markdown--><pre><code class="language-txt">Content-Security-Policy: default-src &apos;self&apos;; report-uri http://reportcollector.example.com/collector.cgi
</code></pre>
<!--kg-card-end: markdown--><h3 id="frame-options">Frame Options</h3><p>	The <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options?ref=blog.jenningsga.com">X-Frame-Options</a> header is used to determine whether your webpage can be embedded in another webpage. A common phishing technique is to use an iframe to load another website within the parent. Thus making it look like another website allowing the attacker to gather sensitive information. This header will tell a browser which domains are allowed to embed or not allow it at all.</p><!--kg-card-begin: markdown--><pre><code class="language-txt">X-Frame-Options: allow-from https://example.com/
</code></pre>
<!--kg-card-end: markdown--><h3 id="xss-protection">XSS Protection</h3><p>	The <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection?ref=blog.jenningsga.com">X-XSS-Protection</a> header is similar to Content Security Policy which helps prevent cross-site attacks. It determines what actions to do when an attack is found from the context of a web browser.</p><!--kg-card-begin: markdown--><pre><code class="language-txt">X-XSS-Protection: 1; mode=block
</code></pre>
<!--kg-card-end: markdown--><h3 id="referrer-policy">Referrer Policy</h3><p>	The <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy?ref=blog.jenningsga.com">Referrer-Policy</a> header can be used to restrict what information is sent from a browser when leaving your website. A web browser will include the full url of the website it had just visited prior to navigating to any other page on the web. With this header, you can ask to restrict the conditions and values the browser should send this information.</p><!--kg-card-begin: markdown--><pre><code class="language-txt">Referrer-Policy: no-referrer-when-downgrade
</code></pre>
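<!--kg-card-end: markdown--><p>	Before turning to the online analyzers listed below, you can spot-check which of these headers a site actually sends with a few lines of Python. This is a minimal sketch using the requests library against a placeholder domain; substitute the site you want to inspect.</p><!--kg-card-begin: markdown--><pre><code class="language-python">import requests

SECURITY_HEADERS = [
    &apos;Content-Security-Policy&apos;,
    &apos;X-Frame-Options&apos;,
    &apos;X-XSS-Protection&apos;,
    &apos;Referrer-Policy&apos;,
    &apos;Strict-Transport-Security&apos;,
]

# Fetch the page and report which security headers are present.
response = requests.get(&apos;https://example.com/&apos;)
for header in SECURITY_HEADERS:
    print(&apos;{}: {}&apos;.format(header, response.headers.get(header, &apos;(not set)&apos;)))
</code></pre>
<!--kg-card-end: markdown-->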
<!--kg-card-end: markdown--><h3 id="online-domain-analyzers-and-tools">Online Domain Analyzers and Tools</h3><p>	The following tools are invaluable for a web hosting platform to check the ranking and correctness of the security being used.</p><ul><li>The <a href="https://www.ssllabs.com/ssltest/analyze.html?ref=blog.jenningsga.com">SSLLabs analyzer</a> offers an extensive free utility to test the encryption rating of a domain.</li><li>The <a href="https://securityheaders.com/?ref=blog.jenningsga.com">Security Headers analyzer</a> can be used to verify a domains usage of HTTP security headers.</li><li><a href="https://mozilla.github.io/server-side-tls/ssl-config-generator/?ref=blog.jenningsga.com">Mozilla SSL Configuration Generator</a> can be used to generate configuration based on several profiles of browser support for many different web servers.</li><li><a href="https://www.sslshopper.com/ssl-checker.html?ref=blog.jenningsga.com">SSLShopper ssl checker</a> is a utility which checks the correctness of ordering of a domains certificate chain.</li><li><a href="https://crt.sh/?ref=blog.jenningsga.com"><a href="https://crt.sh/?ref=blog.jenningsga.com">Sectigo</a> crt.sh</a> provides a way of historically searching for certificates provided by certificate authorities.</li></ul><h2 id="subresource-integrity">Subresource Integrity</h2><p>The final security attribute we will talk about today is the <a href="https://www.w3.org/TR/SRI/?ref=blog.jenningsga.com">Subresource Integrity</a>. This HTML attribute allows us to provide a hash on certain HTML elements, such as script and style tags. Doing so will tell the browser that the content loaded from the resource should match what is expected on the page. Any deviation will prevent the browser from including the content on the page thus adding an additional layer of protection for our users.</p><!--kg-card-begin: markdown--><pre><code class="language-html">&lt;script src=&quot;https://example.com/example-framework.js&quot;
        integrity=&quot;sha384-Li9vy3DqF8tnTXuiaAJuML3ky+er10rcgNR/VqsVpcw+ThHmYcwiB1pbOxEbzJr7&quot;
        crossorigin=&quot;anonymous&quot;&gt;&lt;/script&gt;
</code></pre>
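<!--kg-card-end: markdown--><p>	How is that integrity value produced? It is simply a base64-encoded cryptographic digest of the file&apos;s contents, prefixed with the hash algorithm&apos;s name. Here is a minimal Python sketch; the filename is a placeholder, and you would run this against the exact asset your page references.</p><!--kg-card-begin: markdown--><pre><code class="language-python">import base64
import hashlib

# Read the exact bytes the browser will receive.
with open(&apos;example-framework.js&apos;, &apos;rb&apos;) as f:
    content = f.read()

# The SRI value is the base64-encoded digest, prefixed with the algorithm name.
digest = hashlib.sha384(content).digest()
print(&apos;sha384-&apos; + base64.b64encode(digest).decode())
</code></pre>
<!--kg-card-end: markdown-->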
<!--kg-card-end: markdown--><h1 id="conclusion">Conclusion</h1><p>	Hopefully now we have learned how to make our website more secure and our users safer. There is a sense in pride when using the above tools and all of the validation tests have turned green showing our success. Even if our users are none the wiser, we can pat ourselves on the back knowing that we are taking the right steps to making the internet a safer place.</p>]]></content:encoded></item><item><title><![CDATA[Deep Dive into Requests]]></title><description><![CDATA[One of the most popular libraries in the Python ecosystem is the infamous requests library. In this post, we will look at basic and more advanced usages of the library which will help write concise pythonic code.]]></description><link>https://blog.jenningsga.com/deep-dive-into-requests/</link><guid isPermaLink="false">632052e22a16870001d0a0d2</guid><category><![CDATA[Python]]></category><category><![CDATA[HTTP]]></category><dc:creator><![CDATA[Patrick Jennings]]></dc:creator><pubDate>Tue, 19 Feb 2019 04:37:54 GMT</pubDate><media:content url="https://blog.jenningsga.com/content/images/2020/01/requests_cover.jpg" medium="image"/><content:encoded><![CDATA[<h1 id="introduction">Introduction</h1><img src="https://blog.jenningsga.com/content/images/2020/01/requests_cover.jpg" alt="Deep Dive into Requests"><p>One of the most popular libraries in the Python ecosystem is the infamous r<a href="https://github.com/kennethreitz/requests?ref=blog.jenningsga.com">equests</a> library. Requests is used primarily for creating, sending, and parsing HTTP requests and responses. Requests tag line is &quot;HTTP for Humans&quot; which is very appropriate for this easy to use and wholesome project.</p><p>You will find many clients written in Python utilizing this library for communication with HTTP RESTful API services. In this post, we will look at the library and talk about some common scenarios when dealing with web APIs and how to solve them cleanly with the library. We will look at basic and more advanced usages of the library which will help you write concise pythonic code.</p><h1 id="the-basics">The Basics</h1><p>The top level module exposes <a href="http://docs.python-requests.org/en/master/api/?ref=blog.jenningsga.com#requests.get">several functions</a> which correspond to the standard set of HTTP verbs.</p><!--kg-card-begin: markdown--><pre><code class="language-python">import requests
result = requests.get(&apos;https://httpbin.org/get&apos;, params={&apos;search&apos;: &apos;this is a search term&apos;})
assert &apos;search&apos; in result.json()[&apos;args&apos;]
</code></pre>
<!--kg-card-end: markdown--><p>This is the most basic functionality of the library: you send an HTTP request to a given URL and an HTTP response object is returned, with attributes such as the status code, headers, cookies, and response data easily accessible.</p><p>If your code only utilizes these functions, you will want to stay tuned for the more advanced usages, as they can really step up your HTTP game.</p><h1 id="the-session">The Session</h1><p>When accessing a standardized web API, it is generally advised to use a <a href="http://docs.python-requests.org/en/master/user/advanced/?highlight=session&amp;ref=blog.jenningsga.com#session-objects">Session</a> context object. With a session created, all subsequent requests made from within the session benefit from HTTP connection persistence (keep-alive) as well as shared headers and cookies, among other functionality.</p><p>For example, if an API requires all requests to have a certain HTTP header set, a session allows you to define this default behaviour once.</p><!--kg-card-begin: markdown--><pre><code class="language-python">from requests import Session

session = Session()
session.headers.update({&apos;X-Custom-Header&apos;: &apos;any value&apos;})

result = session.get(&apos;https://httpbin.org/headers&apos;)
data = result.json()

assert &apos;X-Custom-Header&apos; in data[&apos;headers&apos;]
assert data[&apos;headers&apos;][&apos;X-Custom-Header&apos;] == &apos;any value&apos;
</code></pre>
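<!--kg-card-end: markdown--><p>Cookies behave the same way: once the server sets one, the session stores it and sends it with every subsequent request. A minimal sketch using httpbin&apos;s cookie endpoints:</p><!--kg-card-begin: markdown--><pre><code class="language-python">from requests import Session

session = Session()

# httpbin sets the cookie and redirects to /cookies, which echoes it back.
session.get(&apos;https://httpbin.org/cookies/set/flavor/chocolate&apos;)

# The cookie now lives on the session and rides along automatically.
result = session.get(&apos;https://httpbin.org/cookies&apos;)
assert result.json()[&apos;cookies&apos;][&apos;flavor&apos;] == &apos;chocolate&apos;
</code></pre>
<!--kg-card-end: markdown-->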
<!--kg-card-end: markdown--><h1 id="event-hooks">Event Hooks</h1><p>Another important, but often overlooked functionality within requests, is the ability to register <a href="http://docs.python-requests.org/en/master/user/advanced/?highlight=session&amp;ref=blog.jenningsga.com#event-hooks">event <a href="http://docs.python-requests.org/en/master/user/advanced/?highlight=session&amp;ref=blog.jenningsga.com#event-hooks">hooks</a></a>. With event hooks, we can transform the HTTP response and its data before it reaches our client code.</p><p>Also, event hooks allows us to write common error handling routines in a much simpler fashion. For example, we can catch certain status codes in an event hook and handle the error within. This is a really powerful concept since the validation will automatically be applicable to all requests sent from the session.</p><!--kg-card-begin: markdown--><pre><code class="language-python">import requests

def forbidden_error_handler(response, *args, **kwargs):
    if response.status_code == requests.codes.forbidden:
        raise Exception(&apos;Raise some custom exception&apos;)

session = requests.Session()
session.hooks[&apos;response&apos;].append(forbidden_error_handler)

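# Requesting a 403 status triggers the hook, which raises our custom exception.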
session.get(&apos;https://httpbin.org/status/{}&apos;.format(requests.codes.forbidden))
</code></pre>
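<!--kg-card-end: markdown--><p>If all you want is for requests&apos; own exceptions to be raised on any error status, a hook can simply delegate to raise_for_status. A short sketch of that pattern:</p><!--kg-card-begin: markdown--><pre><code class="language-python">import requests

session = requests.Session()

# raise_for_status() raises requests.exceptions.HTTPError for 4xx/5xx responses.
session.hooks[&apos;response&apos;].append(
    lambda response, *args, **kwargs: response.raise_for_status()
)

try:
    session.get(&apos;https://httpbin.org/status/500&apos;)
except requests.exceptions.HTTPError as error:
    print(&apos;Caught by the hook: {}&apos;.format(error))
</code></pre>
<!--kg-card-end: markdown-->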
<!--kg-card-end: markdown--><h1 id="authentication">Authentication</h1><p>There are several protocols for authenticating with HTTP services using requests.</p><p><a href="http://docs.python-requests.org/en/master/api/?ref=blog.jenningsga.com#requests.auth.HTTPBasicAuth">Basic auth</a> is a standardized authentication method supported natively in which an encoded value, given a username and password, is set in the headers of the requests.</p><!--kg-card-begin: markdown--><pre><code class="language-python">from requests import Session
from requests.auth import HTTPBasicAuth

session = Session()
session.auth = HTTPBasicAuth(&apos;username&apos;, &apos;password&apos;)

result = session.get(&apos;https://httpbin.org/headers&apos;)
result_data = result.json()
assert &apos;Authorization&apos; in result_data[&apos;headers&apos;]
assert result_data[&apos;headers&apos;][&apos;Authorization&apos;].startswith(&apos;Basic&apos;)
</code></pre>
<!--kg-card-end: markdown--><p>After registering the authentication method, every request will invoke the callable with the outgoing request, which is then modified with the necessary authentication before transport.</p><h2 id="token-based-authentication">Token Based Authentication</h2><p>Some APIs require registering with a specific endpoint to obtain an authentication token, which is then used for subsequent calls to the API. An example of this can be found in the Django REST framework&apos;s <a href="https://www.django-rest-framework.org/api-guide/authentication/?ref=blog.jenningsga.com#generating-tokens">TokenAuthentication</a>.</p><p>If this token has a time to live, you may need to write your own authentication. For custom authentication, <a href="http://docs.python-requests.org/en/master/api/?ref=blog.jenningsga.com#requests.auth.AuthBase">AuthBase</a> can be subclassed directly; a minimal sketch of this approach appears at the end of this post, after the Transport Adapters example.</p><p>A good example to study is the <a href="http://docs.python-requests.org/en/master/_modules/requests/auth/?ref=blog.jenningsga.com#HTTPDigestAuth">HTTPDigestAuth</a> class. This authentication class registers authentication-specific event hooks before the request is sent. The hooks check whether the response status code indicates missing authentication; if so, an auth token is generated and the request is re-sent.</p><p>Unfortunately, this particular use case requires a good bit of logic due to the complexity of the request data that can be sent with the library. For example, it must rewind iterables and file descriptors before re-sending the request. Depending on your particular use case, you may be able to trim down your custom authentication class considerably.</p><h1 id="transport-adapters">Transport Adapters</h1><p>Not for the faint of heart, <a href="http://docs.python-requests.org/en/latest/user/advanced/?ref=blog.jenningsga.com#transport-adapters">Transport Adapters</a> are used for modifying the underlying connection engine within the requests library. Adapters can be mounted to a session in order to supply specific functionality to a particular set of protocols, domains, or routes.</p><p>One common use case for supplying an adapter is automatic retry logic. The <a href="https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html?ref=blog.jenningsga.com#urllib3.util.retry.Retry">Retry</a> class from urllib3 can be used to specify this behaviour.</p><!--kg-card-begin: markdown--><pre><code class="language-python">import requests
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter

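# Retry up to 3 times, forcing a retry whenever the server returns a 500.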
retry_policy = Retry(3, status_forcelist=[requests.codes.server_error])
adapter = HTTPAdapter(max_retries=retry_policy)

session = requests.Session()
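# Only URLs beginning with this prefix are routed through the retry-enabled adapter.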
session.mount(&apos;https://httpbin.org/status/&apos;, adapter)

url = &apos;https://httpbin.org/status/{}&apos;.format(requests.codes.server_error)

try:
    result = session.get(url)
except requests.exceptions.RetryError:
    print(&apos;Retries exceeded. Success!&apos;)
</code></pre>
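<!--kg-card-end: markdown--><p>Returning to the custom authentication discussed earlier: below is a minimal sketch of an AuthBase subclass that lazily fetches a token with a time to live and attaches it to each outgoing request. The token endpoint, response shape, and header scheme are hypothetical placeholders; a real implementation would mirror whatever your API actually uses.</p><!--kg-card-begin: markdown--><pre><code class="language-python">import time

import requests
from requests.auth import AuthBase

class TokenAuth(AuthBase):
    # The token endpoint and &apos;Token&apos; header scheme below are hypothetical;
    # adjust them to match your API.
    def __init__(self, token_url, username, password, ttl=3600):
        self.token_url = token_url
        self.credentials = (username, password)
        self.ttl = ttl
        self.token = None
        self.expires_at = 0

    def _refresh(self):
        # Exchange the credentials for a short-lived token.
        response = requests.post(self.token_url, auth=self.credentials)
        response.raise_for_status()
        self.token = response.json()[&apos;token&apos;]
        self.expires_at = time.time() + self.ttl

    def __call__(self, request):
        # Called for every outgoing request; refresh the token if it is stale.
        if self.token is None or time.time() &gt;= self.expires_at:
            self._refresh()
        request.headers[&apos;Authorization&apos;] = &apos;Token {}&apos;.format(self.token)
        return request

session = requests.Session()
session.auth = TokenAuth(&apos;https://api.example.com/obtain-token/&apos;, &apos;username&apos;, &apos;password&apos;)
</code></pre>
<!--kg-card-end: markdown-->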
<!--kg-card-end: markdown--><h1 id="conclusion">Conclusion</h1><p>As can be seen, Requests has much more functionality than meets the eye. I hope this deep dive has given you an idea of how you can improve your uses of the library to write more clean and modular code.</p>]]></content:encoded></item></channel></rss>