If we XOR all the numbers together, then for each bit, the result is 1 if an odd number of the original numbers had a 1 in that bit, and 0 otherwise.

If n is even, even in the worst-case scenario, we can first set the bits of all numbers at a specific position to 1, and then XOR them to make them all 0.

If n is odd, the process is similar. We set a specific bit to 1 in the first step, then set the first n-1 numbers to 0, and finally, set the last two numbers to 1 and then to 0.
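
The parity fact above can be sanity-checked with a tiny standalone snippet (this only demonstrates the XOR property, not the full solution):

```java
public class XorParity {
    // XOR of all numbers: each bit of the result is 1 iff an odd number
    // of the inputs have a 1 in that bit position.
    public static int xorAll(int[] a) {
        int x = 0;
        for (int v : a) x ^= v;
        return x;
    }
}
```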

Java

C++

Firstly, passing through non-major cities won't yield a better solution; nor will passing through only one major city.

So, apart from the direct route from city a to city b, the only possibly better solution is to find the nearest major cities to a and to b separately.

Enumerate these two major cities and take the best option.

Java

C++

The main construction method is to shift each row by one position. For example:

$$
\begin{bmatrix}
0 & 1 & 2 & 3 & 4 \\
4 & 0 & 1 & 2 & 3 \\
3 & 4 & 0 & 1 & 2 \\
2 & 3 & 4 & 0 & 1 \\
1 & 2 & 3 & 4 & 0 \\
\end{bmatrix}
$$

For $m>n$, use the construction above directly. Since the number of columns is greater than the number of rows, fill the columns beyond $n$ with 0.

For $m \leq n$, the construction above would lack a 0, so stop shifting the elements from the $(m-1)$-th row onward.

Java

C++

Clearly, if the average is not an integer, there is no solution.

Furthermore, if $a_i=avg$, the number of candies given away and received back by $a_i$ must be equal.

Since each person has to give away and receive candies once, those who have the average number of candies don't need to be considered; just skip them.

For the remaining people, any amount received or given away is $2^k$ candies for some $k$. So the binary representation of the difference between a person's original number of candies and the average must consist of several 1s followed by several 0s, e.g., 111100 or 1111.

Therefore, we can determine how many candies each of these people receives or gives away based on the changes in their number of candies.

In the end, the candies given away must correspond to the recipients.

Java

C++

Unlike the Easy version, each person can choose not to give away or not to receive candies.

So, for those whose number of candies is exactly equal to the average, we can ignore them.

For the remaining people, if the binary representation of the difference in candies contains more than one 1, they must both give away and receive candies. Only cases with a single 1 in their binary representation can choose to either give away or receive candies (e.g., 1000 can choose to give or receive, but 1100 cannot).

For those who can choose not to exchange candies, they provide more flexibility.

If a person needs to receive $2^x$ candies, they can choose to receive $2^{x+1}$ candies and give away $2^x$ candies. Conversely, if they need to give away $2^x$ candies, they can do the opposite.

In other words, besides meeting their own demand, these people can also fall back to $x$ when $x+1$ cannot be accommodated.

So, unlike the Easy version, we need to differentiate between hard and soft constraints. When processing x from large to small, if a hard requirement or supply of x cannot be handled, we need to see if x-1 can be supplemented.

Note: when processing x, if the soft demand or supply hasn't been used for x+1, it will also be converted into a hard demand or supply.

Java

C++

This problem feels like a mash-up of two classic (or simple) problems, forced into a single question.

For convenience, let's first define some functions and variables:

- $len(i,j)$ represents the number of nodes visited from node $i$ to node $j$ along the entire path. For example, in Sample 1, $len(2,3)=3$, $len(1,2)=1$.
- $max\_deep$ represents the height of the entire tree. For example, when $n=2$, $max\_deep=2$. When $n=9$, $max\_deep=4$.
- $deep(node)$ calculates the depth of a specific node.
- $number\_of\_nodes\_at\_layer(layer)$ indicates how many nodes there are in a certain layer in this tree. For example, when $n=4$, $number\_of\_nodes\_at\_layer(2)=2$, $number\_of\_nodes\_at\_layer(3)=1$.

The problem can be broken down into three main steps:

**Step 1:** Maintain a $dp\_s[i][j]$, representing all possibilities of a path of length $j$ where the maximum value $s$ is $i$. Here, $1\leq i\leq m$, $1\leq j \leq 2 \cdot max\_deep - 1$.

(In other words, the longest possible path in this tree contains at most $2 \cdot max\_deep - 1$ nodes)

**Step 2:** Maintain a $dp\_len[i]$, representing the total number of paths with length $i$ in this tree.

For instance, when $n=4$, $dp\_len[2]=3$, $dp\_len[1]=4$.

**Step 3:** Calculate the final result. The final result equals $\sum_{len=1}^{2 \cdot max\_deep - 1}dp\_len[len]\cdot m^{n-len} \cdot (\sum_{s=1}^{m} s\cdot dp\_s[s][len])$.

This means, for each specified length of the path $len$:

There are $dp\_len[len]$ different paths.

Nodes outside of this path can be chosen arbitrarily, leading to $m^{n-len}$ possibilities.

Nodes within this path, for each possible $s$, offer $dp\_s[s][len]$ choices. Finally, the result needs to be multiplied by $s$ in the calculation.

Two key points:

- The expression $\sum_{i=1}^n\sum_{j=i}^n s_{i,j}$ involves a summation, and the final result requires a further double summation over it, so it is natural to rearrange the $s_{i,j}$ terms and unify the summation.
- Since $dp\_s[i][j]$ is relatively easy to obtain, the only missing piece of the final result is $dp\_len[i]$. This subproblem resembles a classic one, so an existing solution seemed likely.

Firstly, a straightforward combinatorial method came to mind. It's clear that $dp\_s[i][j]=\sum_{k=1}^{j} C_j^k\cdot (i-1)^{j-k}$.

In other words, select $k$ out of $j$ nodes and set their value to $i$, while the values of other nodes can be any value between $1$ and $i-1$.

However, this method has an issue with its time complexity: $O(m\cdot max\_deep^2)\approx 10^5 \cdot {(10^2)}^2 \approx 10^9$. This is because for each $i,j$, we need to loop through $k$.

So, I tried to observe the relationship between $dp\_s[i][j]$ and other nodes. I noticed that $dp\_s[i][j]=i\cdot dp\_s[i][j-1] + \sum_{k=1}^{i-1} dp\_s[k][j-1]$.

This means either the maximum value of the previous $j-1$ nodes was already $i$, allowing the new node to be any value between $1$ and $i$.

Or the previous maximum value was less than $i$, and the new node must be $i$.

The term $\sum_{k=1}^{i-1} dp\_s[k][j-1]$ can be maintained as a prefix sum, so the complexity is $O(m\cdot max\_deep)$.
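
A sketch of this prefix-sum recurrence (plain `long` arithmetic with no modulus; an actual solution would reduce modulo the problem's prime). Conveniently, the recurrence telescopes to $dp\_s[i][j]=i^j-(i-1)^j$, which gives an easy sanity check:

```java
public class DpS {
    // dp[i][j] = number of length-j sequences with values in [1..m]
    // whose maximum equals i, via the recurrence
    // dp[i][j] = i * dp[i][j-1] + sum_{k<i} dp[k][j-1],
    // where the inner sum is maintained as a running prefix sum.
    public static long[][] build(int m, int maxLen) {
        long[][] dp = new long[m + 1][maxLen + 1];
        for (int i = 1; i <= m; i++) dp[i][1] = 1;
        for (int j = 2; j <= maxLen; j++) {
            long prefix = 0; // sum_{k=1}^{i-1} dp[k][j-1]
            for (int i = 1; i <= m; i++) {
                dp[i][j] = i * dp[i][j - 1] + prefix;
                prefix += dp[i][j - 1];
            }
        }
        return dp;
    }
}
```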

There are various methods to solve this, and I chose a relatively straightforward approach.

For a given path, if we know its starting and ending points, we can determine the length of this path and all the nodes it passes through.

Furthermore, if we know the starting point, the topmost point, and the depth of the ending point of this path, we can calculate: 1. The length of this path; 2. The total number of such paths.

Let's denote the starting point as $from$, the depth of the topmost point as $father$, and the depth of the ending point as $to$. It's not hard to see that for each tuple $[from, father, to]$, the length of this path is $[deep(from)-father] + (to - father) + 1$, and there are $2^{to - father -1}$ possible paths.

So, $dp\_len[deep(from)-father + (to - father) + 1]+=number\_of\_nodes\_at\_layer(deep(from))\cdot 2^{to - father -1}$. In other words, we iterate through $deep(from)$; for each starting point with depth $deep(from)$, we iterate through the depth of the topmost point and the depth of the ending point, and add the result to the corresponding $dp\_len$ entry one by one.

(**Note that this idea chooses a fixed starting point, not the depth of the starting point.**)

There are two issues here:

1. When $father=to$, there is only 1 possible path. We handle this as a special case.

2. When $deep(from)=to$, many paths are counted twice. This is more complicated to handle.

For the second issue, I chose to handle it separately.

So, first, in the calculation process mentioned above, when enumerating $to$, limit its range to $deep(from)-1$. This ensures that at least the starting and ending points have different depths, making our results correct. It also handles non-complete binary trees.

For the case where the depths of the starting and ending points are the same, we consider the situation of the topmost point of the path. Because two leaf nodes must be in the left and right subtrees of the topmost node.

So, for non-complete binary trees, there are three possible scenarios for these left and right subtrees: 1. both subtrees are complete; 2. at least one subtree is not complete; 3. at least one subtree does not reach the required depth.

Let's denote the depth of the starting and ending points as $target$ and the depth of the topmost point as $father$.

For the first scenario, for each $father$, there are $(2^{target-father -1})^2$ possible selections. There are $\lfloor\frac {number\_of\_nodes\_at\_layer(target)} {2^{target-father}}\rfloor$ fathers satisfying this condition.

For the third scenario, obviously, there are 0 possibilities.

For the second scenario, there is only one father that satisfies this condition. To calculate its selection count, we need to find out how many leaf nodes are in its right subtree. It can be observed that the right subtree has $\max([number\_of\_nodes\_at\_layer(target) \% (2^{target-father})] - (2^{target-father -1}),0)$ leaf nodes, denoted as $num\_right$ (subtracting left subtree and first scenario). So, its selection count is $num\_right \cdot (2^{target-father -1})$.

Finally, add up all the above results into $dp\_len$: $dp\_len[(target-father) \cdot 2 + 1] += (2^{target-father-1})^2 \cdot \lfloor\frac{number\_of\_nodes\_at\_layer(target)}{2^{target-father}}\rfloor + (2^{target-father-1}) \cdot \max([number\_of\_nodes\_at\_layer(target) \% (2^{target-father})] - 2^{target-father-1}, 0)$

Java

C++

It is only possible to divide the array into two groups with the same parity if the total sum is even.

Java

C++

Round from right to left sequentially.

Java

C++

Firstly, let's consider the scenario where $a$ contains no duplicate elements.

Obviously, $\min(a_i)$ will appear $n-1$ times, the second smallest $a_i$ will appear $n-2$ times, and so on. The second largest number will appear 1 time, and the largest number won't appear.

Next, consider the scenario with possible duplicates. Nothing changes: if, for example, two identical numbers are both the second smallest, one can be treated as the second smallest and the other as the third smallest.

In summary, count the occurrences of each number in $b$.

Then, iterate from smallest to largest. Suppose $b_i$ is the $j$-th smallest number; subtract $n-j$ from $count[b_i]$. If $count[b_i]$ doesn't become 0, it means the $(j+1)$-th smallest number is the same as the $j$-th smallest.

Java

C++

$a_u-a_v \geq b_u-b_v$ is equivalent to $a_u-b_u \geq a_v-b_v$.

So, directly compute $a_i-b_i$ for any point $i$. The maximum $a_i-b_i$ is the answer.

Java

C++

Firstly, it's not difficult to realize that the essence of $\sum\limits_{p=1}^{10^9}f_p$ is the sum of lengths of all intervals.

For each $p$, if it lies in $[s, x_i]$, it will add 1 to the result. So, every number within the interval will be added exactly once.

Now, it's easy to see that we can sort the $x_i$. This way, we can enumerate $s$ from small to large.

When $s$ changes from $x_i$ to $x_{i+1}$, the length of all intervals on the left side (including $i$) increases by $x_{i+1}-x_i$. The length of all intervals on the right side (excluding $i$) decreases by $x_{i+1}-x_i$. So, $ans[i+1]=ans[i]+i\cdot(x_{i+1}-x_i)-(n-i)\cdot(x_{i+1}-x_i)$.

Finally, sort $ans$ back based on the indices.
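
A sketch of this sweep, under the assumption that the quantity for each starting point $s=x_i$ is the total distance $\sum_j |x_j - x_i|$ (interval lengths up to a constant shift); mapping the answers back to the original indices is omitted here:

```java
import java.util.Arrays;

public class SweepSum {
    // For sorted x, ans[i] = sum_j |x[j] - x[i]|, computed with the
    // recurrence (0-based): moving s right by d = x[i+1]-x[i] adds d for
    // each of the i+1 points on the left (inclusive) and subtracts d for
    // each of the n-i-1 points strictly on the right.
    public static long[] distances(long[] x) {
        int n = x.length;
        long[] sorted = x.clone();
        Arrays.sort(sorted);
        long[] ans = new long[n];
        for (long v : sorted) ans[0] += v - sorted[0];
        for (int i = 0; i + 1 < n; i++) {
            long d = sorted[i + 1] - sorted[i];
            ans[i + 1] = ans[i] + (long) (i + 1) * d - (long) (n - i - 1) * d;
        }
        return ans;
    }
}
```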

Java

C++

Substituting $a_j=x-a_i$ into $a_i\cdot a_j=y$ results in $a_i^2-x\cdot a_i+y=0$. So, $a_i=\frac {x\pm\sqrt{x^2-4\cdot y}} {2}$.

Therefore, it only needs to check if $a_i$ has integer solutions and count how many such $a_i$ exist.

Note that $a_j$ calculated might be equal to $a_i$. Also, $a_i$ might have two solutions, but it could also have only one solution.
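
A sketch of the root extraction, checking that the discriminant is a perfect square and that $x$ and $\sqrt{x^2-4y}$ have the same parity (counting duplicates in the actual array is a separate step):

```java
public class RootPair {
    // Integer solutions t of t^2 - x*t + y = 0, i.e. t = (x ± sqrt(x^2-4y))/2.
    // Returns the roots (empty if none, one element for a double root,
    // two for distinct roots).
    public static long[] integerRoots(long x, long y) {
        long disc = x * x - 4 * y;
        if (disc < 0) return new long[0];
        long r = (long) Math.sqrt((double) disc);
        // adjust for floating-point error around the integer square root
        while (r * r > disc) r--;
        while ((r + 1) * (r + 1) <= disc) r++;
        if (r * r != disc) return new long[0];
        if ((x - r) % 2 != 0) return new long[0]; // x and r must share parity
        if (r == 0) return new long[]{x / 2};
        return new long[]{(x - r) / 2, (x + r) / 2};
    }
}
```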

Java

C++

Consider the process of constructing the minimum spanning tree; we consider adding each edge to the tree sequentially.

When merging two subsets of the disjoint set union (DSU), besides the edge connecting the two subsets, there are $size[u]\cdot size[v] - 1$ other edges that could be added between them. Each of these edges can take $s-w$ possible lengths (where $w$ is the length of the newly added edge), and counting also the option of not adding the edge at all, each possible edge has $\max(s-w+1, 1)$ possibilities.

So, merging two subsets could result in $\max(s-w+1, 1)^{size[u]\cdot size[v] - 1}$ possibilities.

Therefore, when adding the $i$-th edge, the new subset could have $ans[u]\cdot ans[v] \cdot \max(s-w+1, 1)^{size[u]\cdot size[v] - 1}$ possibilities (where $ans[u]$ represents the number of possibilities of subset $u$).

Java

C++

Both players will prioritize button $c$. So, Anna can win an additional point if and only if $c$ is odd. Then, it only matters if $a-b$ is greater than zero. If it's less than or equal to 0, Anna will definitely lose; otherwise, she will win.

Java

C++

Since he will definitely eat a cookie every d minutes at least, the influence of removing one seller is only between two adjacent sellers.

So, removing a seller $i$ changes the calculation from $\lfloor\frac {s_{i+1}-s_{i}} d\rfloor + \lfloor\frac{s_i - s_{i-1}} d\rfloor + 1$ to $\lfloor\frac {s_{i+1}-s_{i-1}} d\rfloor$.

Therefore, we only need to check which seller's removal has a bigger impact.
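
The per-seller change can be sketched directly from the two formulas (hypothetical names: `s` holds the sellers' positions in order, `d` is the interval):

```java
public class SellerImpact {
    // For each removable seller i (not the first or last, 0-based):
    // before = floor((s[i+1]-s[i])/d) + floor((s[i]-s[i-1])/d) + 1,
    // after  = floor((s[i+1]-s[i-1])/d).
    // Returns before - after, the change caused by removing seller i.
    public static long[] removalGain(long[] s, long d) {
        int n = s.length;
        long[] gain = new long[Math.max(0, n - 2)];
        for (int i = 1; i + 1 < n; i++) {
            long before = (s[i + 1] - s[i]) / d + (s[i] - s[i - 1]) / d + 1;
            long after = (s[i + 1] - s[i - 1]) / d;
            gain[i - 1] = before - after;
        }
        return gain;
    }
}
```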

Java

C++

Firstly, it's not difficult to realize that there can be at most $\lfloor \frac n 2 \rfloor$ different results.

Assuming there exists $x>\lfloor \frac n 2 \rfloor$, the best scenario would be $a_i=x$ and $a_{i+1}=2\cdot x$, in which case $a_{i+1}>n$.

From this, it can be guessed that there are at most $\lfloor \frac n 2 \rfloor$ different results. So, let's try to find a way to construct them.

It's not hard to see that $\lfloor \frac n 2 \rfloor$ must be adjacent to $2\cdot \lfloor \frac n 2 \rfloor$. Because $3\cdot \lfloor \frac n 2 \rfloor>n$.

Similarly, $\lfloor \frac n 2 \rfloor - 1$ must be adjacent to $2\cdot (\lfloor \frac n 2 \rfloor - 1)$.

For example, if $n=21$, then 10 is adjacent to 20, and 9 is adjacent to 18.

Thus, through this method, we can construct up to around $\lfloor \frac n 4 \rfloor$. For example, 20 can be constructed to be adjacent to 6 and 12. But making 5 adjacent to 10 is not obvious, as 10 is already adjacent to 20.

However, it can be observed that 4 doesn't need to be adjacent to 9; instead, it should be adjacent to $2\cdot 4=8$.

In this way, each number $a_i$ only needs to be adjacent to $2\cdot a_i$.

For $n=21$, it would be $[1,2,4,8,16]+[3,6,12]+[5,10,20]+[7,14]+[9,18]+[11]+[13]+[15]+[17]+[19]+[21]$. In simpler terms, it's similar to the Sieve of Eratosthenes algorithm for finding primes.
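
A sketch of this construction, one doubling chain per odd starting number (my own arrangement of the chains described above):

```java
import java.util.ArrayList;
import java.util.List;

public class DoublingChains {
    // Build the arrangement of 1..n as chains [x, 2x, 4x, ...] for odd x,
    // so every a_i is adjacent to 2*a_i whenever 2*a_i <= n.
    public static List<Integer> build(int n) {
        List<Integer> result = new ArrayList<>();
        for (int x = 1; x <= n; x += 2)      // each chain starts at an odd number
            for (int v = x; v <= n; v *= 2)  // and keeps doubling while it fits
                result.add(v);
        return result;
    }
}
```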

Java

C++

Firstly, considering the data constraints, $n^2=9\cdot 10^6 \approx 10^7$.

Because the relative importance of $l_0$ and $l_1$ does not scale linearly as $a$ grows. For instance, with candidates $l_0=1, l_1=100$; $l_0=2, l_1=90$; and $l_0=5, l_1=30$, the best choice differs as $a$ changes.

For $a=1$, $1\cdot 1 + 100= 101$ is optimal, but for $a=20$, $20\cdot 2 + 90 = 130$ is optimal. For $a=100$, $100\cdot 5 + 30=530$ is optimal.

So, if we can calculate the maximum $l_0$ for any given $l_1$, we can obtain the result within the range of $n^2$. In other words, we need to calculate a $dp[l_1]=\max(l_0)$.

The idea pauses here for now.

Obviously, $l_0$ can be calculated within $n\cdot k = n^2$ as the longest contiguous substring of 0s that can be modified at most k times. That is, $dpLeft[i][j]$ represents the longest contiguous substring for the i-th tree **from left to right**, modified at most j times.

Similarly, you can maintain a $dpRight[i][j]$ representing the longest contiguous substring for the i-th tree **from right to left**, modified at most j times.

Now, integrate all the parts together.

It can be considered that the longest contiguous 1s can be enumerated within $n^2$.

Suppose the interval being considered is $[left, right]$, and the number of operations required to change all numbers in any interval to 1 can be maintained as a prefix sum, denoted as $count[right]-count[left-1]$.

Let $remain=k-(count[right]-count[left-1])$, then $dp[right-left+1]=\max(dpLeft[left-1][remain], dpRight[right+1][remain])$.

Java

C++

Simply match column by column.

Java

C++

If $b_i \geq b_{i-1}$, there is no need to make any changes; just set $a_i = b_i$ and $a_{i-1} = b_{i-1}$.

For $b_i < b_{i-1}$, insert $b_i$ in the middle, i.e., $a_i = b_i$, $a_{i-0.5} = b_i$, and $a_{i-1} = b_{i-1}$.

Java

C++

Calculate the lengths that result when the bars are laid horizontally. For example, if you have [4, 2, 1] and lay it horizontally, it becomes [3, 2, 1, 1], denoted as array b.

You can observe that for any x, if there are c indices i such that $a_i \geq x$, then the corresponding $b_x$ is equal to c.

For example, in [4, 2, 1], there are 3 elements greater than or equal to 1, so $b_1 = 3$. Thus, b = [3, 2, 1].

(In comparison to the actual horizontal placement, the last '1' was actually omitted, but it's not significant. This is because for $a_1$ to match n, it must be equal to n.)

Finally, compare the original array a with b. If they are the same, it's a YES; otherwise, it's a NO.
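
A brute-force $O(n^2)$ sketch of this check (the counting can of course be done in $O(n)$):

```java
public class HorizontalCheck {
    // b[x] = number of indices i with a[i] >= x, for x = 1..n;
    // the answer is YES iff b equals a.
    public static boolean check(int[] a) {
        int n = a.length;
        int[] b = new int[n];
        for (int x = 1; x <= n; x++) {
            int c = 0;
            for (int v : a) if (v >= x) c++;
            b[x - 1] = c;
        }
        return java.util.Arrays.equals(a, b);
    }
}
```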

Java

C++

Clearly, if there is only one of each ice cream ball type, you can make $C_x^2$ different ice creams for x types.

Since the problem allows combining two same ice cream balls into one, each type of ice cream ball can have at most two. Moreover, for every type that has two ice cream balls, you can create one more type of ice cream.

So, x types of ice cream balls (any quantity) can result in a minimum of $C_x^2$ different ice creams and a maximum of $C_x^2+x$ different ice creams.

You can perform binary search to find the minimum and maximum x needed to achieve the desired number of different ice creams.

Then, you can brute force each x within this range to find the minimum number of ice cream balls required.

Java

C++

Firstly, as long as you don't skip watching movies entirely, the total mood points deducted, regardless of the $a_i$, depend only on the day of the last movie you watch.

For example, if you watch the last movie on the third day, you only need to subtract $3 \cdot d$. After that, it's irrelevant to d.

What's left to consider is the sum of several $a_i$ values. Naturally, you'd want to choose a positive $a_i$ as large as possible.

So, you can enumerate the day x on which you watch the last movie. Then, use a priority queue over all $a_i$ values on or before day x; if the length of the priority queue exceeds m, discard the smallest values.

*One issue is that $a_x$ might not be chosen as the last movie day. It's possible that the last movie day is actually y. However, since you deducted an extra $(x-y) \cdot d$, this will not lead to a better result than before.*
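
A sketch of this enumeration with a min-heap (hypothetical names: `a` holds the daily values, `m` is the maximum number of movies, `d` the per-day deduction):

```java
import java.util.PriorityQueue;

public class MoviePlan {
    // Enumerate the last watched day x; keep the (at most m) largest
    // positive a_i among days 1..x in a min-heap; the candidate value is
    // heapSum - x*d. Watching nothing at all gives 0.
    public static long best(long[] a, int m, long d) {
        PriorityQueue<Long> heap = new PriorityQueue<>();
        long sum = 0, best = 0;
        for (int x = 1; x <= a.length; x++) {
            long v = a[x - 1];
            if (v > 0) {
                heap.add(v);
                sum += v;
                if (heap.size() > m) sum -= heap.poll(); // drop the smallest
            }
            best = Math.max(best, sum - (long) x * d);
        }
        return best;
    }
}
```

Evaluating the candidate at every x, even when day x itself is skipped, covers the caveat in the note above: a suboptimal x only over-deducts, so it never beats the true answer.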

Java

C++

The time limit is 4 seconds, and Codeforces can handle more than $10^6$ operations per second. So, this $O(n \cdot \sum{s}) = O(n^2 \cdot s)$ solution with approximately $10^8$ operations can pass comfortably and runs in about 800ms (even in Java).

First, you can solve a 0/1 knapsack problem to find all possible total sums of these n monsters.

Once you have this total, you can binary search for the required time $ans$. Assuming it takes $ans$ seconds, you can then binary search for the largest combination that is less than $ans \cdot w$. If the remaining monsters can be defeated in less than $ans \cdot f$ time, you can defeat all monsters in $ans$ seconds.

Java

C++

Firstly, Java users should be cautious when using convenient Java features, as they can lead to TLE (Time Limit Exceeded).

It's easy to observe that, no matter what, the difference between adjacent numbers in the sorted array will reduce by 1 each time.

Also, regardless of the transformation, smaller numbers will never exceed larger numbers; at most, they will become the same and merge.

So, if two adjacent numbers need to become the same and their difference is x, it will always take x steps.

Assuming b is the sorted version of a (just for explanatory purposes, not necessary to compute), the problem boils down to finding the maximum $b_i - b_{i-1}$ and the maximum $a_i$.

Maintaining the maximum $a_i$ is straightforward with a set.

In each operation, remove the old $a_i$ and add the new $a_i$ to the set.

You can also maintain the maximum $b_i - b_{i-1}$ using a set.

In each operation, find the two neighbors of $a_i$ (denoted as pre_left and pre_right) and remove $a_i - \text{pre\_left}$ and $\text{pre\_right} - a_i$ from the set. Then, add $\text{pre\_right} - \text{pre\_left}$ to the set. This effectively removes the old $a_i$.

Next, in the set that maintains the values of $a_i$, find the two neighbors of the new $a_i$ (denoted as next_left and next_right), remove $\text{next\_right} - \text{next\_left}$, and add $a_i - \text{next\_left}$ and $\text{next\_right} - a_i$ to the set. This effectively adds the new $a_i$.

Note that since the values of $a_i$ can be repeated, and hence $b_i - b_{i-1}$ can also be repeated, both sets should maintain both the value and the index.

Java

C++

]]>Suppose you need to sort an array or have a TreeSet that requires a specific order.

Let's take the following class as an example:

```
class Node {
    int index, value;

    public Node(int index, int value) {
        this.index = index;
        this.value = value;
    }
}
```

When defining a TreeSet for this class, you can use one of the following two approaches:

```
new TreeSet<>(Comparator.comparingInt((Node o) -> o.value).thenComparingInt(o -> o.index));
```

```
new TreeSet<>((o1, o2) -> o1.value != o2.value ? Integer.compare(o1.value, o2.value) : Integer.compare(o1.index, o2.index));
```

Clearly, the first approach is elegant, but unfortunately, it's slower. It's recommended to use the second approach. In fact, the second approach can still be slow; you can use `o1.value - o2.value` directly (safe only when the subtraction cannot overflow). You could even consider combining two integers into a long for comparison.
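
A sketch of the packed-long trick (assuming the index is non-negative, as array indices are):

```java
import java.util.Comparator;

public class PackedComparator {
    static class Node {
        int index, value;
        Node(int index, int value) { this.index = index; this.value = value; }
    }

    // Pack (value, index) into a single long: value in the high 32 bits,
    // index in the low 32 bits. The value may be any int; the index must be
    // non-negative for the low bits to compare correctly.
    static long key(Node o) {
        return ((long) o.value << 32) | (o.index & 0xffffffffL);
    }

    // One flat Long.compare instead of a chained comparator.
    static final Comparator<Node> CMP = (o1, o2) -> Long.compare(key(o1), key(o2));
}
```

`new TreeSet<>(PackedComparator.CMP)` then orders by value with ties broken by index, using a single comparison per call.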

In short, Java is highly engineered, with many checks and a deep call stack.

Methods like comparingX often require passing a lambda function, and their source code typically includes null checks. If you also need to use thenComparingX, the call stack becomes even deeper. The call stack for the first approach is approximately as follows:

```
treeSet -> comparator_2 -> comparator_1 -> lambda_1 -> lambda_2
```

On the other hand, the call stack for the second approach is simpler:

```
treeSet -> comparator
```

In cases where the necessary logic is the same, the first approach results in a lot of additional function calls, making it slower.

The data size is very small, all within 100. So, you can simply simulate pouring water from the larger vessel to the smaller one step by step.

Java

C++

Obviously, once you pass a trap, you must return after $\lfloor{\frac{s-1}{2}}\rfloor$ steps.

So, we only need to calculate the maximum value of k for each trap that is stepped on and returned to, and then take the minimum.

Java

C++

Assuming a and b exist, and their greatest common divisor is g. Then, $(a+b)=g\cdot(\frac a g + \frac b g)$. In other words, a+b must be divisible by g, and $\frac a g + \frac b g > 1$.

So, we only need to find any composite number x between l and r. Then we choose one of its proper factors as g and set $a=g$, $b=x-g$ (so that $\frac b g = \frac x g - 1$).
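
A brute-force sketch of this construction (trial division is fine for small ranges; the problem's actual bounds may call for something faster):

```java
public class CompositeSplit {
    // Find any composite x in [l, r]; with its smallest prime factor g,
    // output a = g and b = x - g. Then g divides a + b = x, and
    // a/g + b/g = x/g > 1. Returns {a, b}, or null if no composite exists.
    public static long[] split(long l, long r) {
        for (long x = Math.max(l, 4); x <= r; x++) {
            for (long g = 2; g * g <= x; g++) {
                if (x % g == 0) return new long[]{g, x - g};
            }
        }
        return null;
    }
}
```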

Java

C++

Clearly, the least common multiple of x and y corresponds to a number that does not affect the final result. So, we just need to make the remaining numbers corresponding to x as large as possible, and the remaining numbers corresponding to y as small as possible.

Java

C++

I'm not entirely sure if there is a simpler way to solve this problem. I directly used a segment tree, maintaining the XOR sum of intervals corresponding to g=0 and g=1.

Each update operation involves swapping the XOR sums of g=0 and g=1. Finally, you can query the XOR sum of the entire interval.

Java

C++

Consider the fear value $a_i$ of animal i as a directed edge from i to $a_i$. This gives us a directed graph in which every point has exactly one outgoing edge and any number of incoming edges; such a functional graph consists of several cycles with trees hanging off them.

Naturally, for points with no incoming edges, removing them has no effect on the income. So, we prioritize removing points with an in-degree of 0.

This leaves us with several cycles.

For these cycles, we choose to remove the point with the minimum value. Therefore, we find the point x with the minimum value on the cycle and remove $a_x$.

Java

C++

First, if $a_i\geq 2$, then choosing multiplication over addition is obviously better. So, the fundamental problem lies in how to handle $a_i=1$.

It's clear that the 1s at the far left and far right must be excluded from the range [l, r]. So, we can assume that the beginning and end of the array are not 1s.

In that case, the remaining situations are [2,1,1,1,1,1,1,2] and [3,1,3], where there are many 1s in between. In such cases, addition is better than multiplication.

However, there can be at most $10^5$ 1s in between, so their total sum is at most $10^5$. Therefore, as long as there are approximately 20 occurrences of 2, multiplication will always be better no matter how you choose the range (excluding the leftmost and rightmost 1s), because $2^{20}>10^5$.

So, the general idea is that when the product of the non-1 numbers in the array reaches a certain threshold, you can simply choose the largest possible interval for l and r (excluding the leftmost and rightmost 1s). Otherwise, you can brute force the positions of l and r.

It's worth noting that the threshold should not be $10^5$. For example, it could be something like $[2, 1, 1, ..., 1, 5\cdot 10^4]$. The result of choosing multiplication would be $10^5$, but the result of addition would be $(10^5-2)+2+5\cdot 10^4=1.5 \cdot 10^5$.

I didn't calculate the exact threshold; it's simply set to $10^6$. In any case, $2^{32} > 10^9$, which means there are fewer than 32 such numbers no matter what.

Java

C++

⚠️

I am personally rather conservative and default to pessimistic estimates, so I may be underestimating ChatGPT here.

Recently, ChatGPT has become very popular, so I also want to try it out. I chose to try it out on finance and code, which are the areas I care about most.

The entire trial was short and mainly focused on having ChatGPT calculate the IRR (Internal Rate of Return) and help me write code to calculate the IRR.

The conclusion is that ChatGPT is powerful but still has a lot of room for improvement.

First, I tried to ask about the price of a fund a month ago and the stock price of a company. Although this information is publicly available, I felt that ChatGPT was considering legal risks, so it did not provide this data (or if it had the data, it did not use it).

Then, I asked ChatGPT questions about asset allocation strategies. Perhaps because these are already considered general investment advice, ChatGPT did not dodge and gave relatively reasonable answers.

Then I tried to ask ChatGPT about a specific asset allocation scenario and asked it to help me calculate the internal rate of return (IRR).

First, I asked ChatGPT if it understood the meaning of IRR. It answered that it understood and was willing to calculate for me. Next, I provided a cash flow sample from last year and asked it to calculate the IRR for the whole year of last year. Interestingly, the Excel operation method it gave seemed to have no major problems and it understood my language description. However, the result it gave was completely wrong. It is obvious that the correct result should be less than 5%.

Next, I asked ChatGPT to provide Java code to calculate IRR. It gave me the following code:

```
import java.math.BigDecimal;
import java.math.RoundingMode;

public class IRRCalculator {
    public static BigDecimal calculateIRR(double[] cashFlows, double accuracy) {
        BigDecimal irr = new BigDecimal(0.10);
        BigDecimal x1 = new BigDecimal(0.0);
        BigDecimal x2 = new BigDecimal(1.0);
        while (irr.compareTo(x1) == 1 && irr.compareTo(x2) == -1) {
            BigDecimal fx = new BigDecimal(0.0);
            BigDecimal dfx = new BigDecimal(0.0);
            for (int i = 0; i < cashFlows.length; i++) {
                BigDecimal x = irr.add(new BigDecimal(1.0)).pow(i);
                fx = fx.add(new BigDecimal(cashFlows[i]).divide(x, 10, RoundingMode.HALF_UP));
                dfx = dfx.add(new BigDecimal(cashFlows[i] * i).divide(x.multiply(irr), 10, RoundingMode.HALF_UP));
            }
            BigDecimal tmp = irr.subtract(fx.divide(dfx, 10, RoundingMode.HALF_UP));
            x1 = irr;
            irr = tmp;
            if (irr.compareTo(new BigDecimal(0.0)) == -1) {
                irr = x1.multiply(new BigDecimal(0.5));
                x2 = x1;
            }
        }
        return irr.setScale(6, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        double[] cashFlows = new double[]{-100, 20, 10, 0, 0, 0, 10, 0, 105};
        BigDecimal irr = calculateIRR(cashFlows, 0.00001);
        System.out.println("IRR: " + irr);
    }
}
```

There are many problems with this code:

- The calculated IRR value is 3.148211%, which is obviously incorrect, and the correct result should be around 0.4%. The 10% calculated by ChatGPT earlier is really far off.
- In the code, the parameter "accuracy" is not used.
- Using x1 and x2 as the bounds of the IRR value has serious problems. Because the code computes with 1 + IRR, the IRR should be negative in the case of a loss. However, with x2 set as the upper limit, the code can only handle return rates between 0% and 100%.
- The code uses an enumeration-style method instead of bisection, which may be inefficient.

So I pointed out the problems in this code to ChatGPT and asked it to modify the code. It then provided me with the following:

```
import java.math.BigDecimal;

public class IRRCalculator {
    private static final BigDecimal TOLERANCE = BigDecimal.valueOf(0.00001);
    private static final int MAX_ITER = 100;
    private final double[] cashFlows;

    public IRRCalculator(double[] cashFlows) {
        this.cashFlows = cashFlows;
    }

    public double calculateIRR(double accuracy) {
        double x1 = 0.0;
        double x2 = 1.0;
        double fx1 = npv(x1);
        while (Math.abs(x2 - x1) > accuracy) {
            double fx2 = npv(x2);
            if (fx1 * fx2 < 0.0) {
                break;
            }
            x1 = x2;
            fx1 = fx2;
            x2 *= 2;
        }
        double irr = 0.0;
        for (int i = 0; i < MAX_ITER; i++) {
            irr = (x1 + x2) / 2;
            double npv = npv(irr);
            if (Math.abs(npv) < TOLERANCE.doubleValue()) {
                break;
            }
            if (npv < 0.0) {
                x1 = irr;
            } else {
                x2 = irr;
            }
        }
        return irr;
    }

    private double npv(double irr) {
        double npv = 0.0;
        for (int i = 0; i < cashFlows.length; i++) {
            npv += cashFlows[i] / Math.pow(1.0 + irr, i + 1);
        }
        return npv;
    }
}
```

The result is that the problems with this code are even worse.

- The results are completely incorrect. Using the previous data, the calculated IRR comes out as exactly 100%.
- It ignored my third requirement. To be honest, that requirement is indeed a bit difficult to implement.
- I have no idea what this code is doing. But the NPV-calculation code it wrote looks good.
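
For contrast, here is a minimal bisection-based IRR sketch of my own (not ChatGPT's output), assuming the convention that cashFlows[i] occurs at period i with the initial flow undiscounted, and that the NPV is positive at the lower bound and negative at the upper bound:

```java
public class Irr {
    // Net present value with cashFlows[i] occurring at period i
    // (cashFlows[0] is the undiscounted initial flow).
    static double npv(double[] cashFlows, double rate) {
        double npv = 0.0;
        for (int i = 0; i < cashFlows.length; i++) {
            npv += cashFlows[i] / Math.pow(1.0 + rate, i);
        }
        return npv;
    }

    // Bisection on [lo, hi]; assumes npv(lo) > 0 > npv(hi), i.e. the
    // interval brackets a sign change of the NPV.
    public static double irr(double[] cashFlows, double accuracy) {
        double lo = -0.99, hi = 10.0;
        while (hi - lo > accuracy) {
            double mid = (lo + hi) / 2;
            if (npv(cashFlows, mid) > 0) lo = mid; else hi = mid;
        }
        return (lo + hi) / 2;
    }
}
```

Starting the lower bound at -99% also covers loss scenarios, where the IRR is negative.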

Firstly, it is amazing that ChatGPT understands my intentions. However, as of now, it still has a long way to go.

The accuracy is still far from satisfactory, and it can only serve as an auxiliary tool for professional issues. (I don't think "barely" is too harsh a word to use.)

Firstly, if I am not familiar with IRR or the calculation process, a 10% IRR might not seem unreasonable to me. This means that to judge the accuracy of ChatGPT's results, I need to have a sufficient understanding of IRR.

However, if I have a sufficient understanding of IRR, ChatGPT may not have an advantage over other specialized calculators.

Regarding ChatGPT's code writing, I think there is still a long way to go.

Even on an optimistic reading, its first IRR attempt does handle computing IRR to a given accuracy, and enumeration there is not much worse than binary search; one could even argue enumeration is more readable, setting correctness aside for the moment. But none of this should lower my requirements.

Because to use the code without problems, the boundary handling and the complexity analysis must be fully understood in terms of "binary search vs. enumeration" and "the possible value range".

As a result, I feel my requirements have actually become higher: I need not only to know how to write the code, but also to stay vigilant at all times.

Obviously each number can only be replaced by one and the same letter, so we only need to maintain a Map and check whether any number has been replaced by two different letters.
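A minimal sketch of that check in Java (class and method names are mine):

```
import java.util.HashMap;
import java.util.Map;

class DigitLetterCheck {
    // Returns true if no digit is mapped to two different letters.
    // digits[i] is the digit at position i, letters[i] is the letter it was replaced by.
    static boolean consistent(int[] digits, char[] letters) {
        Map<Integer, Character> seen = new HashMap<>();
        for (int i = 0; i < digits.length; i++) {
            Character prev = seen.putIfAbsent(digits[i], letters[i]);
            if (prev != null && prev != letters[i]) {
                return false; // same digit replaced by two different letters
            }
        }
        return true;
    }
}
```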

Java

C++

Count the number and the sum of the odd numbers and of the even numbers. For each query, we just multiply x by the count and add that to the sum.

Also note that when x is odd, the parity flips, so the count and sum must be moved over to the other group.
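A sketch of this bookkeeping, assuming queries of the form "add x to all even numbers (type 0) or all odd numbers (type 1)" and that each query reports the total sum (names are mine):

```
class ParitySums {
    long oddSum, evenSum;
    long oddCnt, evenCnt;

    ParitySums(int[] a) {
        for (int v : a) {
            if (v % 2 != 0) { oddSum += v; oddCnt++; }
            else { evenSum += v; evenCnt++; }
        }
    }

    // type 0: add x to all even numbers; type 1: add x to all odd numbers.
    // Returns the total sum after the query.
    long apply(int type, long x) {
        if (type == 0) {
            evenSum += evenCnt * x;
            if (x % 2 != 0) { // even + odd = odd: all evens become odd
                oddSum += evenSum; oddCnt += evenCnt;
                evenSum = 0; evenCnt = 0;
            }
        } else {
            oddSum += oddCnt * x;
            if (x % 2 != 0) { // odd + odd = even: all odds become even
                evenSum += oddSum; evenCnt += oddCnt;
                oddSum = 0; oddCnt = 0;
            }
        }
        return oddSum + evenSum;
    }
}
```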

Java

C++

Just simulate and record the indexes of all the green lights. Then check from left to right; as the main pointer moves right, the pointer into the green-light indexes also gradually moves right.

When the green-light pointer can no longer move right, it means we need to wrap around into the next cycle.

Java

C++

First count how many factors of 2 there are across all a[i], and then how many factors of 2 each index i has.

If the a[i] do not contain enough factors of 2, sort the indices and prefer the i with the most factors of 2.
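A sketch under the assumption that this is the variant where multiplying a[i] by its 1-based index i contributes that index's factors of 2, and we need n factors of 2 in total across the array (names and signature are mine):

```
import java.util.Arrays;

class PowerOfTwoDeficit {
    static int twos(long v) { return Long.numberOfTrailingZeros(v); }

    // Assumed setup: the array must contain at least n factors of 2 in total;
    // multiplying a[i] by its 1-based index i adds twos(i) extra factors.
    // Returns the minimum number of indices to use, or -1 if impossible.
    static int minOps(long[] a, int n) {
        long have = 0;
        for (long v : a) have += twos(v);
        if (have >= n) return 0;
        int len = a.length;
        Integer[] idx = new Integer[len];
        for (int i = 0; i < len; i++) idx[i] = i + 1;
        Arrays.sort(idx, (x, y) -> twos(y) - twos(x)); // indices with most 2s first
        int ops = 0;
        for (int i = 0; i < len && have < n; i++) {
            if (twos(idx[i]) == 0) break; // odd indices contribute nothing
            have += twos(idx[i]);
            ops++;
        }
        return have >= n ? ops : -1;
    }
}
```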

Java

C++

Obviously, if the condition is changed to $a \leq x \leq c$ and $b \leq y \leq d$, then just make x=a, y=b directly.

Therefore, it is natural to try multiplying x and y by as small a number as possible — for example, 2.

Going further: suppose x and y start at a and b respectively. We can hand one of a's factors over to y and one of b's factors over to x, and then multiply whichever number shrank by as small a factor as possible to push it back above its original value. This can beat multiplying directly by 2.

For example, suppose a is divisible by 2 and b is divisible by 3. Exchanging the factors 2 and 3 gives $x=\frac{3}{2}a$ and $y=\frac{2}{3}b$. Then x grows only by a factor of 1.5, while y still needs to be multiplied by some number to exceed b.

So we enumerate all the ways the factors of x and y can be exchanged with each other.

Java

C++

Obviously, if mex is greater than med, then all values less than or equal to med must appear. That is to say, every number up to the median must appear, and the other numbers don't matter.

So for a given length, its med is fixed, and all values less than or equal to med must appear.

So we only need to enumerate the length. For each length, we calculate the minimum interval that contains all the numbers that must appear, and then calculate how many ways to expand to the specified length based on this minimum interval.

Java

C++

💡

The solution to this problem uses a more complex segment tree solution instead of a simpler greedy one.

1. Sort b while preserving b's original indices.
2. For each $b_i$, count the available numbers and the pending numbers; from these two values compute a "number of redundancy".
3. Find the smallest $b_i$ whose "number of redundancy" is 0.
4. Among all $b_j$ with j from 0 to i, find the j with the smallest original index, and assign the currently smallest available number to $b_j$.
5. With the assignment from #4 applied, repeat from #2 until all numbers are allocated.

**Let's start with #3 first.**

Consider the number of numbers that can be assigned for each $b_i$.

For example, when $b=[3, 4, 7, 8]$, there are only 2 numbers that can be used for 3 and 4 which is 1 and 2. And there are 4 numbers that can be used for 7 and 8: 1, 2 , 5, 6.

It is not difficult to find that the two numbers usable by 3 and 4 must be assigned to 3 and 4, not to 7 and 8. This is because 3 and 4 have two pending numbers, and exactly 2 numbers are available to them.

Furthermore, after 3 and 4 are allocated, there are only 2 usable numbers left for 7 and 8, but also only 2 pending numbers left for 7 and 8. (Initially there were 4 usable numbers for 7 and 8, and also 4 pending numbers.)

Therefore, we can draw a conclusion: the allocation of small numbers will affect large numbers, and we should give priority to the smaller $b_i$ whose "number of redundancy" is 0.

For example $[3, 4, 7, 8]$, although the "number of redundancy" of 4 and 8 are both 0, but in #3, we prioritize 4 instead of 8.

**Then we go back to #2.**

We only need to consider how many $b_i$ less than or equal to the current one are still unallocated. So 3 actually has only 1 pending number, 4 has 2, 7 has 3, and 8 has 4.

In other words, 3 does not consider the situation of 4, and 7 does not consider the situation of 8.

Because if we change b to $[3, 5, 7, 8]$, 3 and 5 no longer need to be processed in advance. So when counting pending numbers, we only need to count how many $b_i$ are less than or equal to the current value.

**Let's move on to #4.**

We already know that 1 and 2 must be assigned to 3 and 4. The next question is where to allocate 1. Obviously this depends on the original index retained in #1.

We assign 1 to the element with the smallest original index. That is, if 4 originally appeared in front of 3, 1 is assigned to 4.

Note that after allocating 1, we cannot immediately allocate the remaining numbers (only 2 in this example), because after each allocation the "number of redundancy" of the remaining elements may change.

For example, after 1 is allocated to 4, the "number of redundancy" of 3 changes from 1 to 0: there were originally 2 numbers available for 3 but only 1 pending number, and after 1 goes to 4, only 2 is left for 3.

This is more obvious in Example 6: $b=[8, 7, 4, 5]$ (after sorting, $[4, 5, 7, 8]$). At the beginning only 8 has a "number of redundancy" of 0, and 8 happens to be the element with the smallest original index. But after 1 is allocated to 8, 2 cannot simply go to the remaining element with the smallest original index (that is, 7); instead we must go back to #2 and process 4 and 5 first.

Let's take sample 6 as an example: $b=[8, 7, 4, 5]$.

#1, we sort and keep the original index. We get $[(4: 2), (5: 3), (7: 1), (8: 0)]$.

#2, we calculate the current "number of redundancy". We get $[(4, 2, 2), (5, 3, 1), (7, 1, 1), (8, 0, 0)]$

For example, 5 has [1, 2, 3], a total of 3 available numbers, minus 2 pending numbers, so its "number of redundancy" is 1.

#3, only one "number of redundancy" is 0: (8, 0, 0).

#4, among all the elements up to 8, the one with the smallest original index is 8 itself. So we assign 1 to 8.

Repeat #2, the "number of redundancy" after update is: $[(4, 2, 1), (5, 3, 0), (7, 1, 0)]$

Here, because 8 has been allocated, it is deleted. At the same time 1 is no longer available, so the remaining "number of redundancy" are all decremented by 1.

#3, at this time the "number of redundancy" of 5 and 7 are both 0. We choose the smaller 5 to continue processing.

#4, among all elements up to 5 (that is, 4 and 5, excluding 7), the smallest original index belongs to 4. So we assign 2 to 4.

Repeat #2 to get $[(5, 3, 0), (7, 1, 0)]$

Here, only 3 remains available for 5, and since 4 has been deleted, 5's pending count also drops to 1. So the "number of redundancy" of 5 is still 0.

#3 and #4, assign 3 to 5.

Repeat #2, #3, #4, assign 6 to 7.

Obviously, at any time, once the "number of redundancy" is less than 0, there must be no solution.

Look at #4 first: its essence is to find the minimum original index within the interval from 0 to the $i$ obtained in #3. The immediate idea is a segment tree maintaining the range minimum of the original indices.

For #3, we maintain the range minimum of the "number of redundancy" and, with a binary search, find the smallest index $i$ such that the minimum "number of redundancy" over the prefix is 0. This can also be implemented with the segment tree.

Then consider the case of deleting the allocated $b_i$.

Note that all the minimum values of range are maintained on the segment tree, so after the number is allocated, we can change the value of its original index and the "number of redundancy" to infinity. In this way, this $i$ will never appear in subsequent process.

Finally consider the effect of assigning a number x to $b_i$ on the "number of redundancy" of other $b_j$.

For $b_j$ larger than $b_i$, the "number of redundancy" is actually unaffected: these $b_j$ lose one available number but also lose one pending number. (far right)

$b_j$ smaller than x are not affected either, because x could never be assigned to them. (far left)

So in essence, only when $b_j$ lies between x and $b_i$ does the "number of redundancy" decrease by 1: the available count of $b_j$ drops by 1 while its pending count does not. (middle)

But this is just a range modification, so it can still be handled by the segment tree.

Therefore, the entire implementation is to use the segment tree to maintain the minimum value of the original index and the minimum value of the "number of redundancy" of the range, and then find out the target i of #3 and the position of x with binary search.
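The segment-tree primitives this relies on — range minimum with lazy range add, with deletion done as a point add of a huge value — can be sketched as a generic tree (this is a building block, not the full solution):

```
// A minimal lazy segment tree: range add + range minimum query.
// "Deleting" an index is just add(i, i, INFINITY).
class MinSegTree {
    final int n;
    final long[] min, lazy;

    MinSegTree(long[] a) {
        n = a.length;
        min = new long[4 * n];
        lazy = new long[4 * n];
        build(1, 0, n - 1, a);
    }

    private void build(int node, int l, int r, long[] a) {
        if (l == r) { min[node] = a[l]; return; }
        int m = (l + r) / 2;
        build(2 * node, l, m, a);
        build(2 * node + 1, m + 1, r, a);
        min[node] = Math.min(min[2 * node], min[2 * node + 1]);
    }

    private void push(int node) {
        for (int c = 2 * node; c <= 2 * node + 1; c++) {
            min[c] += lazy[node];
            lazy[c] += lazy[node];
        }
        lazy[node] = 0;
    }

    void add(int ql, int qr, long v) { add(1, 0, n - 1, ql, qr, v); }

    private void add(int node, int l, int r, int ql, int qr, long v) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { min[node] += v; lazy[node] += v; return; }
        push(node);
        int m = (l + r) / 2;
        add(2 * node, l, m, ql, qr, v);
        add(2 * node + 1, m + 1, r, ql, qr, v);
        min[node] = Math.min(min[2 * node], min[2 * node + 1]);
    }

    long query(int ql, int qr) { return query(1, 0, n - 1, ql, qr); }

    private long query(int node, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return Long.MAX_VALUE / 4;
        if (ql <= l && r <= qr) return min[node];
        push(node);
        int m = (l + r) / 2;
        return Math.min(query(2 * node, l, m, ql, qr),
                        query(2 * node + 1, m + 1, r, ql, qr));
    }
}
```

The solution keeps two such trees — one over original indices, one over the "number of redundancy" — and binary searches on top of them.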

Java

C++

After sorting, check whether the sum of the first two numbers equals the third.
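As a sketch for the three-number case:

```
import java.util.Arrays;

class TriangleSum {
    // After sorting, check whether the two smaller numbers sum to the largest.
    static boolean check(int[] a) {
        int[] b = a.clone();
        Arrays.sort(b);
        return b[0] + b[1] == b[2];
    }
}
```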

Java

C++

Sort directly, and then check whether any two adjacent numbers are equal.

Java

C++

Scan each row to see whether any whole row is R. If so, the last color painted must be R; otherwise it must be B. (Painting nothing at all is not allowed.)
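A sketch of the scan (names are mine):

```
class LastColor {
    // If any row consists entirely of 'R', the last stroke painted must be R;
    // otherwise it must be B.
    static char lastPainted(char[][] grid) {
        for (char[] row : grid) {
            boolean allR = true;
            for (char c : row) {
                if (c != 'R') { allR = false; break; }
            }
            if (allR) return 'R';
        }
        return 'B';
    }
}
```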

Java

C++

First, note that the data range is only 1000, so all coprime pairs within 1000 can be found directly in about $10^6$ operations.

We put the coprime numbers found into 1000 lists; the $i$-th list contains exactly the numbers coprime with $i$.

Then, for each value that appears in the array $a$, we record the largest index at which it appears, marked as max_pos. (If this value is chosen in the answer, a larger index is always better than a smaller one.)

Finally, for each value $i$ that appears, go through its list from the first step to find every $j$ coprime with it, check whether max_pos of $j$ exists, compute the result of this pair, and take the maximum.
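A compact sketch, assuming the goal is to maximize $i+j$ (1-based) over pairs with $\gcd(a_i, a_j)=1$; for simplicity it checks gcd on value pairs directly instead of precomputing the coprime lists:

```
class CoprimeMaxIndexSum {
    static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

    // Max i + j (1-based) over pairs with gcd(a[i-1], a[j-1]) == 1, or -1.
    // Values are assumed to lie in 1..1000.
    static int solve(int[] a) {
        int MAX = 1000;
        int[] maxPos = new int[MAX + 1]; // largest 1-based index of each value
        for (int i = 0; i < a.length; i++) maxPos[a[i]] = i + 1;
        int best = -1;
        for (int v = 1; v <= MAX; v++) {
            if (maxPos[v] == 0) continue;
            for (int w = v; w <= MAX; w++) {
                if (maxPos[w] != 0 && gcd(v, w) == 1) {
                    best = Math.max(best, maxPos[v] + maxPos[w]);
                }
            }
        }
        return best;
    }
}
```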

Java

C++

Obviously, binary search.

Based on the array a, maintain an array max, where max[i] is the maximum of a[0..i] — that is, the largest single step encountered on the way up to step i.

Also maintain a prefix-sum array of a.

Then, for each input k, binary search on the max array; with the found i, use the prefix-sum array to get the answer.

Note that values of a can reach $10^9$, so the prefix-sum array must use 64-bit integers.
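A sketch of the whole scheme (names are mine):

```
class StairQueries {
    final long[] maxUpTo; // maxUpTo[i] = max of a[0..i]
    final long[] prefix;  // prefix[i] = a[0] + ... + a[i]

    StairQueries(int[] a) {
        int n = a.length;
        maxUpTo = new long[n];
        prefix = new long[n];
        long mx = 0, sum = 0;
        for (int i = 0; i < n; i++) {
            mx = Math.max(mx, a[i]);
            sum += a[i]; // 64-bit: each value may reach 1e9
            maxUpTo[i] = mx;
            prefix[i] = sum;
        }
    }

    // Binary search the last index whose running maximum is <= k,
    // then read off the prefix sum at that index.
    long query(long k) {
        int lo = 0, hi = maxUpTo.length - 1, ans = -1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (maxUpTo[mid] <= k) { ans = mid; lo = mid + 1; }
            else hi = mid - 1;
        }
        return ans < 0 ? 0 : prefix[ans];
    }
}
```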

Java

C++

We count the number of letters in s and t.

Obviously, if there is a non-a letter in t, then directly put that non-a letter at the front of t, while s puts an a at the front.

Now suppose t consists entirely of the letter a, and s contains some letter that is not a.

If s has fewer a's than t, then obviously s is greater than t regardless.

If s has at least as many a's as t, then s must be longer than t, so s is still larger.

In summary, as long as there is a letter other than a in s, then s must be greater than t.

In the end, there are only scenarios where both s and t are a's. Then just compare the length directly.
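The case analysis above can be collapsed into a small decision function, assuming the underlying question is whether s and t can each be rearranged so that s is strictly smaller than t (a simplification of the actual query-based problem; names are mine):

```
class RearrangeCompare {
    // Can s and t be rearranged so that s is lexicographically smaller than t?
    static boolean possible(String s, String t) {
        boolean tHasNonA = t.chars().anyMatch(c -> c != 'a');
        if (tHasNonA) return true;       // put t's non-'a' first, s's 'a' first
        boolean sHasNonA = s.chars().anyMatch(c -> c != 'a');
        if (sHasNonA) return false;      // t is all 'a': s always ends up larger
        return s.length() < t.length();  // both all 'a': compare lengths
    }
}
```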

Java

C++

First of all, it is obvious that array b is non-decreasing. At the same time, given a's value range, b contains at most 64 distinct values, because each increase ORs at least one new bit in. (In fact 32 is enough, but I don't want to think about long vs. int, and writing 64 works just as well.)

Therefore, among all permutations of array a, only the arrangement of the first 64 elements affects the result. (If b were still increasing after 64 elements, the corresponding a could be moved to the front for a result at least as good.)

So just brute-force the first 64 positions: for the i-th position, pick the element that makes the corresponding b[i] as large as possible.
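A sketch of the greedy (names are mine):

```
class MaxPrefixOr {
    // Greedily reorder a so that each prefix OR b[i] is as large as possible.
    // Only the first ~64 picks matter; after that b can no longer grow.
    static long[] reorder(long[] a) {
        int n = a.length;
        long[] res = new long[n];
        boolean[] used = new boolean[n];
        long cur = 0;
        int rounds = Math.min(n, 64);
        int out = 0;
        for (int r = 0; r < rounds; r++) {
            int best = -1;
            for (int i = 0; i < n; i++) {
                if (!used[i] && (best == -1 || (cur | a[i]) > (cur | a[best]))) {
                    best = i;
                }
            }
            used[best] = true;
            res[out++] = a[best];
            cur |= a[best];
        }
        for (int i = 0; i < n; i++) {
            if (!used[i]) res[out++] = a[i]; // order of the rest is irrelevant
        }
        return res;
    }
}
```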

Java

C++

⚠️

This article comes with a lot of pictures

Nextcloud is an open-source network storage system developed from ownCloud. Nextcloud's focus is providing strong privacy alongside comprehensive functionality.

The cloud-storage features need no lengthy description here; they work like most cloud storage on the market. Files can be accessed through the web page, and there is a client that automatically synchronizes local directories to the cloud.

It can probably be compared to Baidu Cloud or Google drive.

What appeals to me most about Nextcloud is its chat function, because communication between family members often calls for a high level of privacy. The exception is video chat, which has particularly high demands on network latency, so I can't use it. Support for everyday functions such as plain chat and sending pictures is very good.

Nextcloud provides a plug-in called "Talk". After installing it, you can use the chat function on the web page.

In the screenshot above I naturally have to mask the chat between me and my family; the visible messages actually come from my monitoring and alerting service.

The bad news is that Grafana doesn't natively support Nextcloud Talk. The good news is that Nextcloud exposes a rich enough API, so I built a small service to forward Grafana's webhooks. Nextcloud also provides very detailed integration documentation.

What's more, Nextcloud provides clients for iOS and Android.

As for tools like WeChat and WhatsApp, we basically don't use them for communication within the family. As for how to convince family members to switch, to each their own.

Nextcloud can play videos on web pages or clients. Although it is not as good as some professional home theater services, it can meet basic needs.

The advantage is that as long as you can access your own Nextcloud service, and as long as you have enough bandwidth, you can play it wherever you are. And Nextcloud itself is a cloud disk. If you want to share videos, you only need one click.

The downside is that Nextcloud itself doesn't have the usual theater features like continuous playing. However, Nextcloud supports WebDav, so if you want to expand it is not difficult.

Sharing is a dangerous thing. Downloading some videos just to watch myself is not a big issue, but sharing can easily create copyright problems (under Chinese law). Family and friends in particular may lack this awareness. So on one hand, explaining it to family members is important; on the other hand, mandatory sharing policies are also necessary.

Nextcloud can force shares to expire, and force all shares to set a password.

For mobile devices, whether it is the Nextcloud client or the chat client, fingerprint password protection can be set.

(Cannot take screenshots due to phone security settings)

Nextcloud can set mandatory two-factor authentication, enable server-side encryption, and set a password policy with automatic expiry.

At the same time, Nextcloud also has plugins for LDAP or SAML. You can use this to log in to all services with a unified account within the family.

Nextcloud also provides a photo-map feature (again a plugin, called Maps): it places photos on a map based on the geographic location stored in the images.

It seems Apple offers a similar feature, and so does Google. But since Nextcloud can already solve the problem with one service, why should I share my information with a third-party service?

But this is a matter of opinion. I have not convinced, nor tried to convince, family members to use this feature. After all, unlike chat, archiving photos is a personal matter, and there is no point interfering with each other.

On Windows, the Nextcloud client also provides virtual file support (it doesn't seem to support Macs). The general effect is that you can choose to only synchronize the file list from the server to the local, and only download the file from the server when it is actually running or used.

First, Nextcloud can be started directly with pure Docker (including the database and the service itself). So, as long as the corresponding files are backed up, the entire service is backed up.

My personal approach is to deploy it directly on K8s and mount a Longhorn volume backed by RAID 1, so a failed hard disk basically never interrupts the service. Longhorn can also back up the volume regularly.

Therefore, with Longhorn's RAID 1 providing high reliability, plus regular Longhorn backups, the setup is basically very dependable.

My personal verdict: nice!

Although a self-signed certificate can indeed be trusted on the client side, the disadvantage is that it has to be trusted once on every client device (for example, in Microsoft's official Remote Desktop 8 on Android).

Since I use Let's Encrypt certificates, I note down here how to configure a Let's Encrypt-signed certificate on a Windows Remote Desktop server.

You need a Let's Encrypt certificate; I won't go into how to generate one here — let's just confirm what the certificates should look like. They should all be PEM certificates, and the certificate must obviously cover the domain name of the server machine, for example `windows.xloypaypa.pub`.

The certificates folder should contain the following:

```
.
├── cert.pem
├── chain.pem
├── fullchain.pem
├── privkey.pem
└── README
```

A PFX certificate contains both the public key and the private key. I personally don't know whether Windows Remote Desktop can be configured directly with a pair of PEM files, but converting to PFX is easy anyway. All it takes is one command:

`openssl pkcs12 -export -out certificate.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem`

Just provide a password when the command above asks for one.

Copy the PFX certificate to the server and double-click it. Then follow the wizard to import the certificate into "Certificates" under the "Personal" folder.

- Be careful to install the certificate to the local computer, not the current user.
- Put the certificate in "Personal"

Once finished, it should look like this (there is a certificate just now in the personal folder):

Double-click the certificate on the right side of the certificate management to see the details of the certificate and get the thumbprint:

Then run in Powershell as administrator:

`wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="<The certificate fingerprint obtained above>"`

At this point, the certificate configuration for Windows Remote Desktop Services is completed.

💡

The main content of this article: a general introduction to integrating OpenStreetMap through Leaflet in a TypeScript + React project.

Instead of asking why choose OpenStreetMap, it's better to ask: why not choose a well-known map such as Baidu Maps or Google Maps?

Because openstreetmap is open source and free, you can refer to their about page for details.

Taking Google Map as an example, if you integrate with Google Map, you will need to register for GCP, and you will be charged after a certain amount of usage is exceeded. For details, please refer to this document.

Other maps, such as ***du, are even less reliable than Google. In fact, I also want to avoid the use of services from commercial companies as much as possible.

Another consideration is reliability, with two main aspects: 1. how many people use it; 2. whether it is stable.

From this page, we can see that the map is in fact used by some European and American governments, by Wikipedia, and so on — so it has plenty of users. In addition, the Nextcloud I'm using actually integrates with this map as well. So although OpenStreetMap has fewer users than Google, it still has a solid user base.

As for stability: first, I believe those European and American government departments must have considered this issue too. Second, my own Nextcloud counts as fairly deep usage, and in roughly 4-5 years of experience I have never seen it break.

Therefore, from the perspective of reducing commercial dependence, from the perspective of reliability, and after comprehensive consideration, I choose openstreetmap.

Because when you google "react openstreetmap", the first result is React Leaflet.

Leaflet actually uses OpenStreetMap by default, so it is easy to integrate. However, Leaflet's documentation is still a little tricky.

First, a demo implemented by someone else that can work.

For installation, the official documentation is very clear: it just installs three libraries — leaflet, react-leaflet, and @types/leaflet.

So just `yarn add` them.

After installing the dependencies, if you implement things exactly as in the "Setup" code of the official documentation, you will probably get the following picture:

The root cause is that Leaflet's CSS is not imported. There are two ways to import it. One option is to reference it in index.html (as the demo above does), downloading it directly from Leaflet (external).

The other option is to directly import the css in the package to tsx file:

```
import 'leaflet/dist/leaflet.css';
```

After the import of css, the entire map can be displayed.

Similar problems, such as missing Marker icons, can be solved in a similar way:

```
import L from 'leaflet';
import icon from 'leaflet/dist/images/marker-icon.png';
import iconShadow from 'leaflet/dist/images/marker-shadow.png';

let DefaultIcon = L.icon({
    iconUrl: icon,
    shadowUrl: iconShadow
});
L.Marker.prototype.options.icon = DefaultIcon;
```

Have to say that the code on the official website is very nice.

Copy the code from the official website:

```
<MapContainer center={[51.505, -0.09]} zoom={13} scrollWheelZoom={false}>
    <TileLayer
        attribution='© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors'
        url="https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"
    />
    <Marker position={[51.505, -0.09]}>
        <Popup>
            A pretty CSS3 popup. <br /> Easily customizable.
        </Popup>
    </Marker>
</MapContainer>
```

In short, it is a simple and easy-to-use map tool, but the documentation is a little missing.

💡

This article addresses installing ffmpeg and other dependencies into a Nextcloud that was installed via Docker, without writing a new Dockerfile or using a third-party image.

```
docker run -p 80:80 --entrypoint /bin/sh nextcloud -c "apt-get update && apt-get install -y ffmpeg && /entrypoint.sh apache2-foreground"
```

By the following command:

```
docker image inspect nextcloud
```

We can see that the official Nextcloud image starts by running /entrypoint.sh with the argument "apache2-foreground".

Therefore, to install ffmpeg before Nextcloud starts, we only need to change the entrypoint to /bin/sh, install the dependencies in the CMD, and finally call the original Nextcloud startup script.

**Option 1: override the entrypoint (the approach above).**

Pros: No need to customize the image, which is convenient for Nextcloud running in K8s. You can pull the official image directly every time, so your Nextcloud stays on the latest official version and the dependencies are installed automatically.

Cons: ffmpeg has to be downloaded on every start, so startup speed depends heavily on network speed.

**Option 2: build your own image with a Dockerfile.**

Pros: Simple, quick, and easy to operate. With a code repository, the Dockerfile gives you version management of the dependencies.

Cons: Every time you upgrade Nextcloud you need to rebuild the image, and you need your own image repository (for example, a Docker Hub account).

**Option 3: use a third-party image.**

Pros: Super easy.

Cons: Version updates are not timely, and third-party images may carry security risks.

**Option 4: give up the feature.**

Pros: There is nothing difficult in the world for one who is willing to give up. I use ffmpeg to generate video previews, and the final experience is only so-so anyway.

Cons: You lose a feature.