What type of memory allocation is referred for linked lists

Dynamic memory allocation

Linked lists are inherently dynamic data structures; they rely on new and delete (or malloc and free) for their operation. Normally, dynamic memory management is provided by the C/C++ standard library, with help from the operating system. However, nothing stops us from writing our own allocator, providing the same services as malloc and free.

Differences between C++ and C

In C++, to dynamically allocate memory, we write new type, or new type[size] for an array, and we get back a pointer, a type*. Internally, both of these boil down to a request for a contiguous chunk of memory of a certain size (sizeof(type) * size). In C, this is provided directly: malloc(bytes) takes a number of bytes as a parameter and returns a void* (a generic pointer) to a contiguous region of memory. Similarly, when we do delete p we are passing the pointer p to the allocator to inform it that the memory is no longer needed. In C, this is done via free(p). I’ll use the C versions, as they are more low-level.

In C++, most standard library classes which need dynamic memory (string, vector, etc.) take a compile-time allocator parameter, which allows a different allocator to be “plugged in” to the class. In C, the standard allocator can only be replaced globally, by linking with a different allocator library.

Restrictions

When we’re writing an allocator, we must bear in mind that we are responsible for handling dynamic memory requests. This means that the allocator code should itself try to be as efficient in terms of memory as possible, and of course, the allocator cannot use any dynamic memory! Thus, although they would be useful, normal implementations of things like linked lists, trees, etc. are not possible in an allocator. We will see that something like linked lists can be used, with some care as to where the “nodes” are placed.

Fragmentation

Fragmentation refers to the tendency of the heap to go from one large available block at the beginning to many small available blocks. It may be possible that, while the total amount of memory is larger than the amount requested, no single block is big enough to satisfy it. For example, suppose we have a heap of size 20, and the following memory requests:

void* a = malloc(5);
void* b = malloc(5);
void* c = malloc(5);
void* d = malloc(5);
free(b);
free(d);

There are theoretically 10 bytes of memory available, but the largest malloc we can satisfy is of size 5! This particular example shows external fragmentation: fragmentation where the available memory is broken into many small blocks rather than a few large ones. There’s relatively little an allocator can do about external fragmentation, partly because we are not allowed to move existing blocks (this would require finding every pointer into a block and updating it to the block’s new location, something that is not possible in either C or C++).

(Internal fragmentation occurs when an allocator reserves more space per-block than is actually requested.)

Basic allocator functionality

The allocator maintains some internal state to keep track of free/in-use memory and responds to two kinds of requests:

  • malloc(n) – A request for n bytes of contiguous memory. The allocator should return a pointer to such a block, if it has one available, or a null pointer if it does not. The block of memory should be marked as “in-use” in the allocator’s internal state. (Secure allocators might zero memory before handing it over, to ensure that existing data is not “leaked”.)

  • free(p) – An offer to release the memory pointed to by p. The allocator should mark the memory used by p as not in-use. (Error-checking allocators might fill the memory with a recognizable byte pattern so that use-after-free errors are easy to detect.)

We assume that once a given region has been given to us (as a pointer returned from malloc) the allocator will not later give us a pointer to the same region; that is, blocks of memory returned by malloc are required to not overlap each other. Similarly, when we release a block with free, we expect that only that block will be affected. The allocator has access to a large region of memory (the heap section), which it may be able to expand via operating system request, but which is otherwise reserved entirely for the allocator’s use.
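In other words, the interface we are implementing consists of just these two functions (a sketch of the declarations, using the standard names and signatures):

#include <stddef.h>

/* Request n contiguous bytes; returns a pointer to the block, or NULL if none is available. */
void* malloc(size_t n);

/* Release a block previously returned by malloc. */
void free(void* p);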

No-free allocation

The simplest possible allocator is one where we simply add each allocated block of memory immediately after the end of the previous allocation. That is, if we have the following three requests:

void* p1 = malloc(12);
void* p2 = malloc(3);
void* p3 = malloc(7);

then we have the following memory layout:

Block       p1        p2        p3
Addresses   0-11      12-14     15-21

When memory is free-d, we do nothing. Allocated memory is never released, until the program ends. The reason why we make this choice is that keeping track of which blocks of memory are in-use and which are not requires a fair bit of complexity, so we avoid it by not doing it at all.

This allocator’s internal state is simply a pointer fp to the next free address (e.g., in the above, this would point to address 22). The implementation of malloc is thus:

char* fp = 0;           // Next free address (start of the heap)

void* malloc(size_t n)
{
    fp += n;            // Reserve n bytes
    return fp - n;      // Start of block
}

while free is just

void free(void* p)
{
    // Does nothing
}

The only advantages to this allocator are that both allocation and deallocation run in \(O(1)\) time. Despite its simplicity, this “never-free” allocator can be quite useful for structures which are partially dynamic, but which are created once, at program startup, and then not freed until program end.

Note that while we can’t free individual blocks, we can free the entire heap, all at once, and revert to a state where nothing has been allocated.
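For instance, resetting the allocator can be as simple as one assignment (a sketch; heap_start is a hypothetical symbol recording where the heap region begins):

static char* heap_start;    // Hypothetical: set to the start of the heap at startup
static char* fp;            // Next free address

// Release everything at once: subsequent allocations reuse the heap from the beginning.
void free_all(void)
{
    fp = heap_start;
}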

Stack-based allocation

Looking at the previous allocator, we can see that there is always one block of memory which we could free easily: the last one. If a free is received for the most-recently allocated block, we can deallocate it by simply decrementing fp by the size of the block. Note that the allocator above does not store the size of the blocks anywhere, so we will need to add some extra information. Just before the beginning of each block, we will write its size, and just after the last byte of the block, we will write a pointer back to the beginning. This allows us to navigate through the blocks as if they were a doubly-linked list.

Each block now looks like this:

Purpose     Size      Block     Start
Address     p - 8     p         p + size

Given a pointer to a block, we can find its starting record (containing its size) by simply going back 8 bytes (1 qword), and then we can find its ending record by adding the size to the pointer.

Malloc must be implemented to maintain these structures:

char* fp = 0;                       // Next free address

void* malloc(size_t n)
{
    char* block = fp + 8;           // Address of block
    *(size_t*)fp = n;               // Write size record
    *(char**)(block + n) = fp;      // Write start record
    fp += 8 + n;
    return block;
}

free looks at the block being released and checks to see if it is the last block allocated; this will be the case if fp points to the address immediately after the start record of the block.

void free(void* p)
{
    size_t n = *(size_t*)((char*)p - 8);
    if((char*)p + n == fp) {        // Last block
        fp = (char*)p - 8;
    } else {                        // Not last block
        // Do nothing.
    }
}

This allocator can free all the memory allocated, provided that calls to free arrive in the opposite of the order of their corresponding mallocs. If this order is not maintained, then some memory will be lost.

With a simple modification, we can allow out-of-order frees to at least partially release their memory. We will add a one-bit flag to the record at the beginning of every block, set to 0 if the block is unused, or 1 if it is in use. (Although pointers are 64 bits wide on x86-64, current implementations only use 48 bits of virtual address space; the high 16 bits of every pointer are therefore available for us to use for other purposes, provided we mask them off before using the pointer.) When we free a block, we perform the following steps:

  • If it is not the last block then we simply mark it as unused (set its in-use flag to 0).

  • If it is the last block, then we free it as above, and then we repeatedly check for not-in-use blocks before it, and free those as well.

Thus, a chain of not-in-use blocks at the end of the heap will all be freed at once. Note that we still do not reuse the memory of not-in-use blocks, unless they are “really” freed. A block’s memory is only really freed if every block after it has also been freed.
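As a concrete illustration, a free along these lines might look as follows. This is only a sketch, not definitive allocator code: it assumes the block layout described above, that malloc advances fp past the end record of each block (i.e. by 16 + n bytes, a slight variation on the code shown earlier), that malloc sets the in-use flag when it writes the size record, and that heap_start marks the beginning of the heap.

#include <stddef.h>

#define IN_USE ((size_t)1 << 63)            // Top bit of the size record

static char* heap_start;                    // Hypothetical: beginning of the heap region
static char* fp;                            // Address just past the last block's end record

void free(void* p)
{
    size_t* size_rec = (size_t*)((char*)p - 8);
    *size_rec &= ~IN_USE;                   // Mark the block as not in use

    // Pop the final block, and any unused blocks before it, off the end of the heap.
    while (fp > heap_start) {
        // The end record (the 8 bytes just below fp) points at the final block's size record.
        size_t* last_size = *(size_t**)(fp - 8);
        if (*last_size & IN_USE)
            break;                          // Final block still in use: stop
        fp = (char*)last_size;              // Really free the final block
    }
}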

A general-purpose allocator

Using the idea of blocks arranged into a linked-list-like structure, with a flag on each block for “in-use” or not, we can generalize the above to a general-purpose allocator which correctly frees memory on every free, not just those that are in the right order. This is called a free list.

Initially, we set things up so that the entire heap is treated as a single entry in the free-list, marked as not in use. Because we’re not treating the list in a stack-like manner, there’s no need for the “backwards” pointers, so each block has a small record before the beginning which consists of a pointer to the next free block. Because the list only contains free blocks, there’s no need to record whether an entry is free or in-use.

To malloc a block, we search through the list for a block whose size is at least as big as requested. Of all these blocks, we have a choice as to which one to use, determined by the allocation strategy:

  • First Fit: use the first block in the list which fits

  • Best Fit: use the block in the list whose size is closest to the requested size.

  • Use the most/least recently free-d block (LIFO, FIFO).

If the size of the block chosen is larger than the requested size, then we split the block; part of it becomes used, and the remainder is still free. If the size of the block exactly matches the requested size, then the block is removed from the free list entirely.
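As an illustration, a first-fit search with splitting might look roughly like the following. This is a sketch under some extra assumptions: each free block carries a small header holding both its size and the next-free pointer (the size is needed to test the fit), and an allocated block keeps its header so that a later free can find its size. The names are illustrative, not a fixed API.

#include <stddef.h>

// Hypothetical header kept at the start of every block.
struct free_block {
    size_t size;                            // Usable bytes in this block
    struct free_block* next;                // Next block on the free list
};

static struct free_block* free_list;        // Head of the free list

// First fit: return the first free block large enough for n bytes, splitting if possible.
void* malloc_first_fit(size_t n)
{
    struct free_block** prev = &free_list;
    for (struct free_block* b = free_list; b != NULL; prev = &b->next, b = b->next) {
        if (b->size < n)
            continue;                       // Too small: keep searching
        if (b->size >= n + sizeof(struct free_block) + 1) {
            // Split: the tail of the block stays on the free list.
            struct free_block* rest = (struct free_block*)((char*)(b + 1) + n);
            rest->size = b->size - n - sizeof(struct free_block);
            rest->next = b->next;
            *prev = rest;
            b->size = n;                    // The allocated part keeps a header with its new size
        } else {
            *prev = b->next;                // Close enough: unlink and hand out the whole block
        }
        return b + 1;                       // Usable memory starts just after the header
    }
    return NULL;                            // No block is big enough
}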

When a block is freed, we add it to the list. This can be done in several ways:

  • Add to the front of the list. (When used with first-fit, this results in LIFO-like behavior).

  • Add to the back of the list. (FIFO)

  • Insert it into the list so that the list is always ordered by address. This is most common, as the structure of the list mirrors the structure of the memory layout.

Insert-in-memory order has the advantage that it makes it relatively easy to coalesce free blocks which are physically adjacent to each other. E.g., if the free list contains an entry for a block 10-19, and an entry for a block 20-49, these should be removed and replaced with a single entry 10-49. This is easiest to detect if the blocks in the list are address-ordered.

Eager coalescing refers to coalescing done during free. Deferred coalescing is done later, either during a malloc (if no block of sufficient size is found), or perhaps during a later free.
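A sketch of eager coalescing with an address-ordered free list, reusing the hypothetical free_block header from the sketch above (b + 1 is the address that was handed out by malloc):

// Insert a block into the address-ordered free list, merging it with
// any physically adjacent neighbours (eager coalescing).
void free_coalescing(void* p)
{
    struct free_block* b = (struct free_block*)p - 1;   // Recover the header

    // Walk the address-ordered list to find b's neighbours.
    struct free_block* prev = NULL;
    struct free_block* next = free_list;
    while (next != NULL && next < b) {
        prev = next;
        next = next->next;
    }

    // Merge with the following block if it starts right where b ends.
    if (next != NULL && (char*)(b + 1) + b->size == (char*)next) {
        b->size += sizeof(struct free_block) + next->size;
        b->next = next->next;
    } else {
        b->next = next;
    }

    // Merge with the preceding block if b starts right where it ends.
    if (prev != NULL && (char*)(prev + 1) + prev->size == (char*)b) {
        prev->size += sizeof(struct free_block) + b->size;
        prev->next = b->next;
    } else if (prev != NULL) {
        prev->next = b;
    } else {
        free_list = b;
    }
}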

The main disadvantage of this allocator is speed: a request for memory must traverse the list, so it runs in \(O(f)\) time, where f is the number of blocks currently on the free list. Calls to free can run in \(O(1)\) (e.g., with LIFO insertion), as in the worst case only three blocks need to be examined and coalesced.

Fixed-size allocator

If we know that every allocation request will be for a block of the same size (or we know that they will all be ≤ some maximum size, and we don’t mind wasting space), we can build a specialized version of any of the above allocators where the block size is hard-coded. We can avoid storing the block size in the records around each block, so these records become either much smaller or, in some cases, non-existent. (E.g., finding the next/previous block is easy if they all have the same size.) A fixed-size allocator is sometimes called an object pool (or memory pool), because a typical use of it is storing objects all of the same type (and hence, having the same size).

An object pool also has the advantage that an allocation for k objects takes the same amount of time as for 1 object, again, because the size of the objects is fixed.
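A minimal sketch of such a pool: free slots are chained through the slots themselves, so no per-block size records are needed at all. The names and sizes here are illustrative.

#include <stddef.h>

#define SLOT_SIZE 32                        // Every allocation is exactly this many bytes
#define NUM_SLOTS 1024

static char pool[SLOT_SIZE * NUM_SLOTS];    // The reserved region
static void* free_head;                     // Most recently freed slot, or NULL
static size_t next_unused;                  // Count of slots never handed out yet

void* pool_alloc(void)
{
    if (free_head != NULL) {                // Reuse a previously freed slot
        void* slot = free_head;
        free_head = *(void**)slot;          // The next free slot was stored in the slot itself
        return slot;
    }
    if (next_unused < NUM_SLOTS)            // Otherwise carve out a fresh slot
        return &pool[SLOT_SIZE * next_unused++];
    return NULL;                            // Pool exhausted
}

void pool_free(void* p)
{
    *(void**)p = free_head;                 // Chain the slot onto the free list
    free_head = p;
}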

Some allocators reserve a section of the heap for “small” object allocations, and use a fixed-size allocator only in this region, while using a more general method for the rest of the heap.

It’s also possible to reserve several regions, each dedicated to a particular size; if a free space cannot be found in a smaller-sized region, we proceed to search the next larger region.

Data Structure

A data structure is a way of structuring and organizing data so that it can be easily managed and manipulated; one example is accessing and inserting data in file systems and databases. Which data structure to use depends on various factors, such as the type of data, the access pattern (e.g., serial access or random access), the frequency of access, etc.

How is memory allocated for a linked list?

A linked list is considered a data structure for which the size is not fixed; memory for its nodes is allocated from the Heap section (e.g. using malloc()) as the nodes are created.


A Programmer’s approach of looking at Array vs. Linked List

In general, the array is considered a data structure for which size is fixed at the compile-time and array memory is allocated either from the Data section (e.g. global array) or Stack section (e.g. local array).
Similarly, a linked list is considered a data structure for which size is not fixed and memory is allocated from the Heap section (e.g. using malloc(), etc.) as and when needed. In this sense, the array is taken as a static data structure (residing in Data or Stack section) while the linked list is taken as a dynamic data structure (residing in the Heap section). Memory representation of array and the linked list can be visualized as follows:

An array of 4 elements (integer type) has been initialized with 1, 2, 3, and 4. Suppose these elements are allocated at memory addresses 0x100, 0x104, 0x108 and 0x10C respectively.

[(1)]    [(2)]    [(3)]    [(4)]
0x100    0x104    0x108    0x10C

A linked list with 4 nodes where each node has integer as data and these data are initialized with 1, 2, 3, and 4. Suppose, these nodes are allocated via malloc() and memory allocated for them is 0x200, 0x308, 0x404 and 0x20B respectively.

[(1), 0x308]    [(2), 0x404]    [(3), 0x20B]    [(4), NULL]
0x200           0x308           0x404           0x20B

Anyone with even a little understanding of arrays and linked lists will find the above explanation familiar: it is well known that array elements are allocated contiguous memory, while the nodes of a linked list are non-contiguous. Though it sounds trivial, this contiguous-versus-non-contiguous memory is the most important difference between an array and a linked list; in fact, this difference is what makes array vs. linked list a real choice. In the following sections, we will try to explore this very idea further.

Since the elements of an array are contiguous in memory, we can access any element randomly using an index, e.g. intArr[3] will directly access the fourth element of the array. (For newbies, array indexing starts from 0, which is why the fourth element is indexed with 3.) Also, because successive elements are contiguous, no extra information needs to be stored in individual elements, i.e. there is no metadata overhead in arrays. Contrary to this, linked list nodes are non-contiguous in memory, so we need some mechanism to traverse or access them. To achieve this, each node stores the location of the next node, and this forms the basis of the link from one node to the next; hence the name Linked list. Though storing the location of the next node is overhead in a linked list, it is required. Typically, we see the linked list node declaration as follows:



C




struct llNode
{
    int dataInt;

    /* nextNode is the pointer to next node in linked list */
    struct llNode* nextNode;
};

So array elements are contiguous in memory and therefore do not require any metadata, while linked list nodes are non-contiguous in memory and therefore require metadata in the form of the location of the next node. Apart from this difference, we can see that an array could have several unused elements because its memory has already been allocated, whereas a linked list holds only the required number of data items. All of the above information about arrays and linked lists appears in several textbooks, though in different ways.

What if we need to allocate array memory from the Heap section (i.e. at run time) and linked list memory from the Data/Stack section? First of all, is it possible? And before that, one might ask why anyone would need to do this. I hope the remaining article makes you rethink the idea of array vs. linked list 🙂

Now consider the case when we need to store certain data in an array (because the array has the property of random access due to contiguous memory) but we don’t know the total size a priori. One possibility is to allocate the memory of this array from the Heap at run time. For example, as follows:

C

/* At run-time, suppose we know the required size for the integer array
   (e.g. input size from the user). Say the array size is stored in the
   variable arrSize. Allocate this array from the Heap as follows: */
int* dynArr = (int*)malloc(sizeof(int) * arrSize);

Though the memory of this array is allocated from Heap, the elements can still be accessed via the index mechanism e.g. dynArr[i]. Basically, based on the programming problem, we have combined one benefit of the array (i.e. random access of elements) and one benefit of the linked list (i.e. delaying the memory allocation till run time and allocating memory from Heap). Another advantage of having this type of dynamic array is that this method of allocating array from Heap at run time could reduce code size (of course, it depends on certain other factors e.g. program format, etc.)

Now consider the case when we need to store data in a linked list (because the number of nodes in a linked list equals the number of actual data items stored, i.e. no extra space as in an array) but we aren’t allowed to get this memory from the Heap again and again for each node. This might look like a hypothetical situation to some folks, but it’s not an uncommon requirement in embedded systems. In several embedded programs, allocating memory via malloc(), etc. isn’t allowed, for multiple reasons. One obvious reason is performance: allocating memory via malloc() is costly in terms of time, and an embedded program is usually required to be deterministic. Another reason could be module-specific memory management, i.e. each module in the embedded system may manage its own memory.

In short, if we need to perform our own memory management instead of relying on the system-provided malloc() and free(), we might choose a linked list which is simulated using an array. I hope this gives some idea of why we might need to simulate a linked list using an array. Now, let us first see how this can be done. Suppose the type of a node in the linked list (i.e. in the underlying array) is declared as follows:

C




struct sllNode
{
    int dataInt;

    /* Here, note that nextIndex stores the location of the next node in
       the linked list */
    int nextIndex;
};

struct sllNode arrayLL[5];

If we initialize this linked list (which is actually an array), it would look as follows in memory:

[(0),-1]   [(0),-1]   [(0),-1]   [(0),-1]   [(0),-1]
0x500      0x508      0x510      0x518      0x520

The important thing to notice is that all the nodes of the linked list are contiguous in memory (each one occupying 8 bytes) and the next index of each node is set to -1. This (i.e. -1) is done to denote that each node of the linked list is empty as of now. This linked list is denoted by head index 0.

Now, if this linked list is updated with four elements of data parts 4, 3, 2, and 1 successively, it would look as follows in memory. This linked list can be viewed as 0x500 -> 0x508 -> 0x510 -> 0x518.

[(1),1]    [(2),2]    [(3),3]    [(4),-2]   [(0),-1]
0x500      0x508      0x510      0x518      0x520

The important thing to notice is next index of the last node (i.e. fourth node) is set to -2. This (i.e. -2) is done to denote the end of the linked list. Also, the head node of the linked list is index 0. This concept of simulating linked lists using an array would look more interesting if we delete say second node from the above-linked list. In that case, the linked list will look as follows in memory:

[(1),2]    [(0),-1]   [(3),3]    [(4),-2]   [(0),-1]
0x500      0x508      0x510      0x518      0x520

The resultant linked list is 0x500 -> 0x510 -> 0x518. Here, it should be noted that even though we have deleted the second node from our linked list, the memory for this node is still there because the underlying array is still there. But the next index of the first node now points to the third node (for which index is 2).

Hopefully, the above examples would have given some idea that for the simulated linked list, we need to write our own API similar to malloc() and free() which would basically be used to insert and delete a node. Now, this is what’s called own memory management. Let us see how this can be done in an algorithmic manner.

There are multiple ways to do so. If we take the simplistic approach of creating a linked list using an array, we can use the following logic. For inserting a node, traverse the underlying array and find a node whose next index is -1. It means that this node is empty. Use this node as a new node. Update the data part in this new node and set the next index of this node to the current head node (i.e. head index) of the linked list. Finally, make the index of this new node as the head index of the linked list. To visualize it, let us take an example. Suppose the linked list is as follows where head Index is 0 i.e. linked list is 0x500 -> 0x508 -> 0x518 -> 0x520

[(1),1]    [(2),3]    [(0),-1]   [(4),4]    [(5),-2]
0x500      0x508      0x510      0x518      0x520

After inserting a new node with data 8, the linked list would look as follows with head index as 2.

[(1),1]    [(2),3]    [(8),0]    [(4),4]    [(5),-2]
0x500      0x508      0x510      0x518      0x520

So the linked list nodes would be at addresses 0x510 -> 0x500 -> 0x508 -> 0x518 -> 0x520

For deleting a node, we need to set the next index of the node as -1 so that the node is marked as the empty node. But, before doing so, we need to make sure that the next index of the previous node is updated correctly to the index of the next node of this node to be deleted. We can see that we have done our own memory management for creating a linked list out of the array memory. But, this is one way of inserting and deleting nodes in this linked list. It can be easily noticed that finding an empty node is not so efficient in terms of time complexity. Basically, we’re searching the complete array linearly to find an empty node.
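As a rough illustration, the insert and delete logic just described might be coded as follows. This is only a sketch: headIndex, insertNode, and deleteNode are illustrative names; -1 marks an empty slot and -2 marks the end of the list, as in the diagrams above; and the slots are assumed to have been initialized to the empty marker as described earlier.

#define EMPTY (-1)                   /* Slot not in use */
#define END   (-2)                   /* Last node of the list */
#define POOL_SIZE 5

struct sllNode
{
    int dataInt;
    int nextIndex;                   /* Index of the next node, EMPTY, or END */
};

struct sllNode arrayLL[POOL_SIZE];
int headIndex = EMPTY;               /* Index of the first node; EMPTY if the list is empty */

/* Insert: linearly search for an empty slot and link it in at the head.
   Returns the new node's index, or EMPTY if the pool is full. */
int insertNode(int value)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (arrayLL[i].nextIndex == EMPTY) {
            arrayLL[i].dataInt = value;
            arrayLL[i].nextIndex = (headIndex == EMPTY) ? END : headIndex;
            headIndex = i;
            return i;
        }
    }
    return EMPTY;                    /* No empty slot: "out of memory" */
}

/* Delete the node at index idx, relinking its predecessor. */
void deleteNode(int idx)
{
    if (headIndex == idx) {
        headIndex = (arrayLL[idx].nextIndex == END) ? EMPTY : arrayLL[idx].nextIndex;
    } else {
        for (int i = headIndex; i != END && i != EMPTY; i = arrayLL[i].nextIndex) {
            if (arrayLL[i].nextIndex == idx) {
                arrayLL[i].nextIndex = arrayLL[idx].nextIndex;
                break;
            }
        }
    }
    arrayLL[idx].nextIndex = EMPTY;  /* Mark the slot as free again */
}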

Let us see if we can optimize it further. Basically, we can maintain a linked list of empty nodes as well in the same array. In that case, the linked list would be denoted by two indexes – one index would be for the linked list which has the actual data values i.e. nodes that have been inserted so far, and other indexes for a linked list of empty nodes. By doing so, whenever, we need to insert a new node in the existing linked list, we can quickly find an empty node. Let us take an example:

[(4),2]    [(0),3]    [(5),5]    [(0),-1]   [(0),1]    [(9),-1]
0x500      0x508      0x510      0x518      0x520      0x528

The above-linked list which is represented using two indexes (0 and 5) has two linked lists: one for actual values and another for empty nodes. The linked list with actual values has nodes at address 0x500 -> 0x510 -> 0x528 while the linked list with empty nodes has nodes at addresses 0x520 -> 0x508 -> 0x518. It can be seen that finding an empty node (i.e. writing own API similar to malloc()) should be relatively faster now because we can quickly find a free node. In real-world embedded programs, a fixed chunk of memory (normally called memory pool) is allocated using malloc() only once by a module. And then the management of this memory pool (which is basically an array) is done by that module itself using techniques mentioned earlier. Sometimes, there are multiple memory pools each one having a different size of a node. Of course, there are several other aspects of our own memory management but we’ll leave it here itself. But it’s worth mentioning that there are several methods by which the insertion (which requires our own memory allocation) and deletion (which requires our own memory freeing) can be improved further.
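If we also keep a second head index for the chain of empty slots (emptyIndex below, an illustrative name reusing the declarations from the previous sketch), finding an empty node no longer requires a linear scan:

int emptyIndex = 0;                  /* Head of the chain of empty slots */

/* Pop an empty slot in O(1) instead of scanning the whole array. */
int allocSlot(void)
{
    if (emptyIndex == EMPTY || emptyIndex == END)
        return EMPTY;                /* No free slot left */
    int slot = emptyIndex;
    emptyIndex = arrayLL[slot].nextIndex;
    return slot;
}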

If we look carefully, the Heap section of memory is basically a big array of bytes managed by the allocator in the C standard library (with help from the underlying operating system), which provides this memory-management service to programmers via malloc(), free(), etc. Aha!!

The important takeaways from this article can be summarized as follows:

A) Array means contiguous memory. It can exist in any memory section be it Data or Stack or Heap.
B) Linked List means non-contiguous linked memory. It can exist in any memory section be it Heap or Data or Stack.
C) As a programmer, looking at a data structure from a memory perspective could provide us with better insight in choosing a particular data structure or even designing a new data structure. For example, we might create an array of linked lists, etc.






Why Linked List is implemented on Heap memory rather than Stack memory?

Pre-requisite:

Linked List Data Structure
Stack vs Heap Memory Allocation

The Linked List is a linear data structure, in which the elements are not stored at contiguous memory locations. The elements in a linked list are linked using pointers. It is implemented on the heap memory rather than the stack memory. This article discusses the reason behind it.

Stack vs Heap Memory

The computer’s memory is divided into stack and heap segments. The stack segment is relatively small; it mainly holds function call frames (local variables, return addresses, and other control information), which is what makes function calls and recursion work. Due to its small size, the stack cannot support very deep recursion, which may throw a stack overflow error.



On the other hand, a linked list has the major advantage of expanding dynamically when needed, without the caller having to reserve memory up front. This is the reason why the heap is preferred for storing linked list objects.

Why linked list is not stored in stack memory?

The reason one cannot safely build a linked list in stack memory comes down to scope rules and the automatic memory management of the stack. The basic idea of a linked list is to link nodes together by their memory addresses. If each node is created on the stack, those nodes are destroyed when they go out of scope, so even if the user keeps pointers to their addresses, it is not safe to assume the memory won’t be overwritten by something else.

The linked list can be implemented on the stack memory as well. Below is the C++ implementation of creating a linked list in stack memory:

C++




// C++ program to implement
// linked list in stack
#include <iostream>
using namespace std;

// Structure of the linked list
struct Node {
    int data;
    struct Node* next;

    // Constructor
    Node(int x)
    {
        data = x;
        next = NULL;
    }
}* head = NULL;

// Function to print the linked list
void printLinkedList(Node* head)
{
    struct Node* temp = head;

    // Traversing linked list
    while (temp) {
        cout << temp->data << " ";
        temp = temp->next;
    }
}

// Driver code
int main()
{
    // Creation of linked list in stack
    struct Node first = Node(1);
    struct Node second = Node(2);
    struct Node third = Node(3);
    struct Node fourth = Node(4);

    head = &first;

    // 1 -> 2 -> 3 -> 4
    first.next = &second;
    second.next = &third;
    third.next = &fourth;
    fourth.next = NULL;

    // Printing the elements of
    // a linked list
    printLinkedList(head);

    return 0;
}
Java




// Java program to implement
// linked list in stack
import java.util.*;

class GFG {

    // Structure of the linked list
    static class Node {
        int data;
        Node next;

        // Constructor
        Node(int x)
        {
            data = x;
            next = null;
        }
    };

    static Node head = null;

    // Function to print the linked list
    static void printLinkedList(Node head)
    {
        Node temp = head;

        // Traversing linked list
        while (temp != null) {
            System.out.print(temp.data + " ");
            temp = temp.next;
        }
    }

    // Driver code
    public static void main(String[] args)
    {
        // Creation of linked list in stack
        Node first = new Node(1);
        Node second = new Node(2);
        Node third = new Node(3);
        Node fourth = new Node(4);

        head = first;

        // 1 -> 2 -> 3 -> 4
        first.next = second;
        second.next = third;
        third.next = fourth;
        fourth.next = null;

        // Printing the elements of
        // a linked list
        printLinkedList(head);
    }
}

// This code contributed by shikhasingrajput
C#




// C# program to implement
// linked list in stack
using System;
using System.Collections.Generic;

public class GFG {

    // Structure of the linked list
    class Node {
        public int data;
        public Node next;

        // Constructor
        public Node(int x)
        {
            data = x;
            next = null;
        }
    };

    static Node head = null;

    // Function to print the linked list
    static void printList(Node head)
    {
        Node temp = head;

        // Traversing linked list
        while (temp != null) {
            Console.Write(temp.data + " ");
            temp = temp.next;
        }
    }

    // Driver code
    public static void Main(String[] args)
    {
        // Creation of linked list in stack
        Node first = new Node(1);
        Node second = new Node(2);
        Node third = new Node(3);
        Node fourth = new Node(4);

        head = first;

        // 1 -> 2 -> 3 -> 4
        first.next = second;
        second.next = third;
        third.next = fourth;
        fourth.next = null;

        // Printing the elements of
        // a linked list
        printList(head);
    }
}

// This code is contributed by shikhasingrajput
Javascript




// Javascript program to implement
// linked list in stack

// Structure of the linked list
class Node {

    // Constructor
    constructor(x) {
        this.data = x;
        this.next = null;
    }
}

// Function to print the linked list
function printLinkedList(head) {
    let temp = head;

    // Traversing linked list
    while (temp) {
        document.write(temp.data + " ");
        temp = temp.next;
    }
}

// Driver code
// Creation of linked list in stack
let first = new Node(1);
let second = new Node(2);
let third = new Node(3);
let fourth = new Node(4);

let head = first;

// 1 -> 2 -> 3 -> 4
first.next = second;
second.next = third;
third.next = fourth;
fourth.next = null;

// Printing the elements of
// a linked list
printLinkedList(head);

// This code is contributed by gfgking.

Output:

1 2 3 4

Condition when linked list implementation on the stack will not work:

The above code won’t work if the logic that creates the linked list and the logic that prints its elements are placed in separate functions. Below is a C++ implementation demonstrating this:

C++




// C++ program to implement
// the above approach
#include <iostream>
using namespace std;

// Structure of the linked list
struct Node {
    int data;
    struct Node* next;

    Node(int x)
    {
        data = x;
        next = NULL;
    }
}* head = NULL;

// Function to return the head pointer
// of the created linked list
Node* CreateLinkedList()
{
    struct Node first = Node(1);
    struct Node second = Node(2);
    struct Node third = Node(3);
    struct Node fourth = Node(4);

    head = &first;

    // 1 -> 2 -> 3 -> 4
    first.next = &second;
    second.next = &third;
    third.next = &fourth;
    fourth.next = NULL;

    return head;
}

// Function to print the linked list
void printLinkedList(Node* head)
{
    struct Node* temp = head;

    // Traversing linked list
    while (temp) {
        cout << temp->data << " ";
        temp = temp->next;
    }
}

// Driver Code
int main()
{
    struct Node* head = CreateLinkedList();
    printLinkedList(head);
    return 0;
}
Java




// Java program to implement
// the above approach
import java.util.*;

class GFG {

    // Structure of the linked list
    static class Node {
        int data;
        Node next;

        Node(int x)
        {
            data = x;
            next = null;
        }
    }

    static Node head = null;

    // Function to return the head pointer
    // of the created linked list
    static Node CreateLinkedList()
    {
        Node first = new Node(1);
        Node second = new Node(2);
        Node third = new Node(3);
        Node fourth = new Node(4);

        head = first;

        // 1 -> 2 -> 3 -> 4
        first.next = second;
        second.next = third;
        third.next = fourth;
        fourth.next = null;

        return head;
    }

    // Function to print the linked list
    static void printLinkedList(Node head)
    {
        Node temp = head;

        // Traversing linked list
        while (temp != null) {
            System.out.print(temp.data + " ");
            temp = temp.next;
        }
    }

    // Driver Code
    public static void main(String[] args)
    {
        Node head = CreateLinkedList();
        printLinkedList(head);
    }
}

// This code is contributed by 29AjayKumar
C#




// C# program to implement
// the above approach
using System;

public class GFG {

    // Structure of the linked list
    class Node {
        public int data;
        public Node next;

        public Node(int x)
        {
            data = x;
            next = null;
        }
    }

    static Node head = null;

    // Function to return the head pointer
    // of the created linked list
    static Node CreateList()
    {
        Node first = new Node(1);
        Node second = new Node(2);
        Node third = new Node(3);
        Node fourth = new Node(4);

        head = first;

        // 1 -> 2 -> 3 -> 4
        first.next = second;
        second.next = third;
        third.next = fourth;
        fourth.next = null;

        return head;
    }

    // Function to print the linked list
    static void printList(Node head)
    {
        Node temp = head;

        // Traversing linked list
        while (temp != null) {
            Console.Write(temp.data + " ");
            temp = temp.next;
        }
    }

    // Driver Code
    public static void Main(String[] args)
    {
        Node head = CreateList();
        printList(head);
    }
}

// This code is contributed by shikhasingrajput

Explanation:

The above C++ code results in a SEGMENTATION FAULT (or other undefined behavior). The reason is that the nodes are created in the stack frame of CreateLinkedList(): all the objects created by a function disappear when its stack frame is popped as the function ends or returns, so the returned head points to memory that is no longer valid. Refer to this article to know why the stack frame is popped whenever a function ends.

Why linked list is stored in heap memory?

In a linked list, when more data needs to be stored, we can allocate memory at run time using malloc() in C or the new operator in C++. Dynamic memory allocation reserves space from the heap area; therefore, a linked list is stored in heap memory.
If there is a need to store a linked list in the stack area, implement the linked list without using malloc or new.

Below is a C++ program that shows how to implement a linked list in heap memory without causing a segmentation fault:

C++




// C++ program to implement
// the above approach
#include <cstdlib>
#include <iostream>
using namespace std;

// Structure of the linked list
struct Node {
    int data;
    struct Node* next;
};

struct Node* head = NULL;

// Function to create nodes of
// the linked list
void CreateLinkedList(int new_data)
{
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));

    new_node->data = new_data;
    new_node->next = head;
    head = new_node;
}

// Function to print the linked list
void printLinkedList()
{
    struct Node* temp = head;

    // Traversing linked list
    while (temp != NULL) {
        cout << temp->data << " ";
        temp = temp->next;
    }
}

// Driver Code
int main()
{
    CreateLinkedList(1);
    CreateLinkedList(2);
    CreateLinkedList(3);
    CreateLinkedList(4);

    printLinkedList();
    return 0;
}
Java




// Java program to implement
// the above approach
import java.util.*;

public class GFG {

    // Structure of the linked list
    static class Node {
        int data;
        Node next;
    };

    static Node head = null;

    // Function to create nodes of
    // the linked list
    static void CreateLinkedList(int new_data)
    {
        Node new_node = new Node();

        new_node.data = new_data;
        new_node.next = head;
        head = new_node;
    }

    // Function to print the linked list
    static void printLinkedList()
    {
        Node temp = head;

        // Traversing linked list
        while (temp != null) {
            System.out.print(temp.data + " ");
            temp = temp.next;
        }
    }

    // Driver Code
    public static void main(String[] args)
    {
        CreateLinkedList(1);
        CreateLinkedList(2);
        CreateLinkedList(3);
        CreateLinkedList(4);

        printLinkedList();
    }
}

// This code is contributed by 29AjayKumar

Output:

4 3 2 1

Linked list in Stack vs Heap Memory

S No. | Linked List in Stack Memory | Linked List in Heap Memory
1 | A linked list in the stack gets access to relatively small memory that is not dynamically expandable. | A linked list in the heap gets access to dynamically expandable memory.
2 | Each node created and stored on the stack is deleted after it goes out of scope. | The memory for a node must be explicitly freed when the node is deleted.
3 | To store a linked list in the stack area, implement the linked list without using malloc or new. | To store a linked list in the heap, implement the linked list using malloc or new.
4 | Nodes created on the stack cannot be accessed after the scope of the function ends. | Nodes created and stored in heap memory can be accessed after the scope of the function ends.





Dynamic memory allocation

Before we dive in, check out this animated explanation of pointers. Fun!

C does not have language support for dynamically allocating new ‘things’. Instead, the programmer has to call a library function called malloc() to allocate a new chunk of memory from the heap segment, and later call free() to return that chunk of memory to the heap. The programmer has to remember to initialize the chunk of bytes received from malloc(), which otherwise should be assumed to contain random data. The programmer also has to be careful to allocate a large enough chunk to hold the data she intends to store there, and not to use pointers to write “outside” that chunk of memory. Lots of flexibility and power - but as with any great power, you must take great care in using it.

In C++, you can use new to dynamically create a new object and delete to discard an object created with new. Java also uses new to create objects, but it has no delete: the Java compiler and runtime handle object deletion and memory recovery automatically - it’s called ‘garbage collection.’

There are four related functions you should understand:

  • malloc p = malloc(n) - allocates n bytes of heap memory; the memory contents remain uninitialized.
  • calloc p = calloc(count, size) allocates count*size bytes of heap memory and initializes it all to zero; this call is appropriate when you want to allocate an array of count items, each of size bytes.
  • realloc p = realloc(p, n) - where p is a pointer to heap memory - expands (or shrinks) its allocation to n bytes.
  • free free(p) - where p is a pointer to heap memory - releases that portion of heap memory for future use.

Our examples today show how to use malloc to allocate space to store a string, and later, a struct holding aggregate types. For an example using calloc and realloc, read about how readlinep() works.
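As a quick illustration (a minimal sketch, not the course’s exact example), here is malloc used to hold a copy of a string:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char* message = "hello, heap";

    // Allocate enough bytes for the characters plus the terminating null byte.
    char* copy = malloc(strlen(message) + 1);
    if (copy == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    strcpy(copy, message);   // The new bytes start out uninitialized, so fill them.
    printf("%s\n", copy);

    free(copy);              // Return the chunk to the heap when done.
    return 0;
}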

Memory allocation of Linked List nodes

The nodes that will make up the list’s body are allocated in the heap memory. We can allocate dynamic memory in C using the malloc() or calloc() function. malloc() takes a single argument (the amount of memory to allocate in bytes). In contrast, calloc() needs two arguments (the total number of variables to allocate in memory and the size in bytes of a single variable). malloc() does not initialize the memory allocated, while calloc() guarantees that all bytes of the allocated memory block have been initialized to 0. To deallocate the allocated memory, we can use free().

// Helper function in C to return new linked list node from the heap
struct Node* newNode(int data)
{
    // allocate a new node in a heap using `malloc()` and set its data
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->data = data;

    // set the `.next` pointer of the new node to point to null
    node->next = NULL;

    return node;
}
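For completeness, a sketch of building and then tearing down a small list with this helper (it assumes a struct Node with an int data field and a struct Node* next field, plus the newNode() helper above):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // Build 3 -> 2 -> 1 by pushing each new node onto the front of the list.
    struct Node* head = NULL;
    for (int i = 1; i <= 3; i++) {
        struct Node* node = newNode(i);
        node->next = head;
        head = node;
    }

    // Walk the list, printing and freeing one node at a time.
    while (head != NULL) {
        struct Node* next = head->next;
        printf("%d ", head->data);
        free(head);
        head = next;
    }
    printf("\n");
    return 0;
}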
