Both players will prioritize button $c$. So, Anna can win an additional point if and only if $c$ is odd. Then, the outcome depends only on whether $a-b$ is greater than zero: if $a-b \le 0$, Anna definitely loses; otherwise, she wins.

Java

C++

Since he will eat a cookie at least once every $d$ minutes, removing one seller only influences the interval between its two adjacent sellers.

So, removing a seller $i$ changes the calculation from $\lfloor\frac {s_{i+1}-s_{i}} d\rfloor + \lfloor\frac{s_i - s_{i-1}} d\rfloor + 1$ to $\lfloor\frac {s_{i+1}-s_{i-1}} d\rfloor$.
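This delta can be checked directly. Below is a sketch (class and method names are mine; I assume $s$ is sorted and that "bigger impact" means the larger reduction in cookies eaten):

```java
// Sketch of the removal check described above. Assumptions: s[] holds the
// sellers' times in sorted order, and only interior sellers can be removed.
class SellerRemoval {
    // change in the cookie count when seller i is removed:
    // old local contribution minus new local contribution
    static long removalImpact(long[] s, int i, long d) {
        long before = (s[i + 1] - s[i]) / d + (s[i] - s[i - 1]) / d + 1;
        long after = (s[i + 1] - s[i - 1]) / d;
        return before - after;
    }

    // index of the interior seller whose removal reduces the count the most
    static int bestToRemove(long[] s, long d) {
        int best = 1;
        for (int i = 2; i + 1 < s.length; i++) {
            if (removalImpact(s, i, d) > removalImpact(s, best, d)) best = i;
        }
        return best;
    }
}
```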

Therefore, we only need to check which seller's removal has a bigger impact.

Java

C++

Firstly, it's not difficult to realize that there can be at most $\lfloor \frac n 2 \rfloor$ different results.

Assuming there exists $x>\lfloor \frac n 2 \rfloor$, the best scenario would be $a_i=x$ and $a_{i+1}=2\cdot x$, but then $a_{i+1}>n$.

From this, it can be guessed that there are at most $\lfloor \frac n 2 \rfloor$ different results. So, let's try to find a way to construct them.

It's not hard to see that $\lfloor \frac n 2 \rfloor$ must be adjacent to $2\cdot \lfloor \frac n 2 \rfloor$, because $3\cdot \lfloor \frac n 2 \rfloor>n$.

Similarly, $\lfloor \frac n 2 \rfloor - 1$ must be adjacent to $2\cdot (\lfloor \frac n 2 \rfloor - 1)$.

For example, if $n=21$, then 10 is adjacent to 20, and 9 is adjacent to 18.

Through this pairing, however, we can only get down to around $\lfloor \frac n 4 \rfloor$. For example, we can make 6 adjacent to 12. But making 5 adjacent to 10 is not obvious, as 10 is already adjacent to 20.

However, it can be observed that 4 doesn't need to be adjacent to 9; instead, it should be adjacent to $2\cdot 4=8$.

In this way, each number $a_i$ only needs to be adjacent to $2\cdot a_i$.

For $n=21$, it would be $[1,2,4,8,16]+[3,6,12]+[5,10,20]+[7,14]+[9,18]+[11]+[13]+[15]+[17]+[19]+[21]$. In simpler terms, it's similar to the Sieve of Eratosthenes algorithm for finding primes.
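The sieve-style construction above can be sketched in a few lines (names are mine): every not-yet-used $i$ starts a chain $i, 2i, 4i, \dots$, so each $a_i$ sits next to $2\cdot a_i$ inside its chain.

```java
import java.util.ArrayList;
import java.util.List;

// Sieve-style chain construction: every unused i starts a chain i, 2i, 4i, ...
class DoublingChains {
    static List<Integer> build(int n) {
        boolean[] used = new boolean[n + 1];
        List<Integer> res = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            if (used[i]) continue;
            for (int j = i; j <= n; j *= 2) { // emit the whole doubling chain
                used[j] = true;
                res.add(j);
            }
        }
        return res;
    }
}
```

For $n=21$ this produces exactly the arrangement listed above, starting with the chain $1,2,4,8,16$.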

Java

C++

Firstly, considering the data constraints, $n^2=9\cdot 10^6 \approx 10^7$.

The trade-off between $l_0$ and $l_1$ is not linear as $a$ increases: for instance, with the candidates $l_0=1, l_1=100$, then $l_0=2, l_1=90$, then $l_0=5, l_1=30$, the best choice differs as $a$ changes.

For $a=1$, $1\cdot 1 + 100= 101$ is optimal, but for $a=20$, $20\cdot 2 + 90 = 130$ is optimal. For $a=100$, $100\cdot 5 + 30=530$ is optimal.

So, if we can calculate the maximum $l_0$ for any given $l_1$, we can obtain the result within the range of $n^2$. In other words, we need to calculate a $dp[l_1]=\max(l_0)$.

Let's set this idea aside for now.

Obviously, $l_0$ can be calculated within $n\cdot k = n^2$ as the longest contiguous substring of 0s that can be modified at most k times. That is, $dpLeft[i][j]$ represents the longest contiguous substring for the i-th tree **from left to right**, modified at most j times.

Similarly, you can maintain a $dpRight[i][j]$ representing the longest contiguous substring for the i-th tree **from right to left**, modified at most j times.

Now, integrate all the parts together.

It can be considered that the longest contiguous 1s can be enumerated within $n^2$.

Suppose the interval being considered is $[left, right]$, and the number of operations required to change all numbers in any interval to 1 can be maintained as a prefix sum, denoted as $count[right]-count[left-1]$.

Let $remain=k-(count[right]-count[left-1])$, then $dp[right-left+1]=\max(dpLeft[left-1][remain], dpRight[right+1][remain])$.

Java

C++

Simply match column by column.

Java

C++

If $b_i \geq b_{i-1}$, there is no need to make any changes; just set $a_i = b_i$ and $a_{i-1} = b_{i-1}$.

For $b_i < b_{i-1}$, insert $b_i$ in the middle, i.e., $a_i = b_i$, $a_{i-0.5} = b_i$, and $a_{i-1} = b_{i-1}$.
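The construction described above transcribes directly into code (a sketch; names are mine, and the "$a_{i-0.5}$" entry becomes an extra inserted copy of $b_i$):

```java
import java.util.ArrayList;
import java.util.List;

// Walk through b; on every descent b[i] < b[i-1], emit b[i] twice
// (the extra copy is the "a_{i-0.5}" element from the text).
class BuildA {
    static List<Integer> build(int[] b) {
        List<Integer> a = new ArrayList<>();
        a.add(b[0]);
        for (int i = 1; i < b.length; i++) {
            if (b[i] < b[i - 1]) a.add(b[i]); // inserted middle copy
            a.add(b[i]);
        }
        return a;
    }
}
```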

Java

C++

Calculate the lengths that result from laying the columns horizontally. For example, laying [4, 2, 1] horizontally gives [3, 2, 1, 1], denoted as array b.

You can observe that for any x, if there are c indices i such that $a_i \geq x$, then the corresponding $b_x$ is equal to c.

For example, in [4, 2, 1], there are 3 elements greater than or equal to 1, so $b_1 = 3$. Thus, b = [3, 2, 1].

(In comparison to the actual horizontal placement, the last '1' was actually omitted, but it's not significant. This is because for $a_1$ to match n, it must be equal to n.)

Finally, compare the original array a with b. If they are the same, it's a YES; otherwise, it's a NO.
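The whole check fits in a few lines (a sketch; following the text, only the first $|a|$ entries of b are compared, and a is assumed non-increasing):

```java
// Compare a with its "horizontal" counterpart: b[x] = #{ i : a[i] >= x },
// checked for x = 1..a.length per the observation above.
class ConjugateCheck {
    static boolean matches(int[] a) {
        for (int x = 1; x <= a.length; x++) {
            int c = 0;
            for (int v : a) if (v >= x) c++; // count entries >= x
            if (c != a[x - 1]) return false;
        }
        return true;
    }
}
```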

Java

C++

Clearly, if there is only one of each ice cream ball type, you can make $C_x^2$ different ice creams for x types.

Since the problem allows combining two identical ice cream balls into one, each type needs at most two balls. Moreover, every type that has two balls yields one more type of ice cream.

So, x types of ice cream balls (any quantity) can result in a minimum of $C_x^2$ different ice creams and a maximum of $C_x^2+x$ different ice creams.

You can perform binary search to find the minimum and maximum x needed to achieve the desired number of different ice creams.

Then, you can brute force each x within this range to find the minimum number of ice cream balls required.
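The bounds above can be written down directly; as a sketch (names are mine), `smallestFeasibleX` demonstrates the binary-search step on the closed-form maximum:

```java
// With x distinct ball types: at least C(x,2) flavors (all singles),
// at most C(x,2) + x (every type doubled).
class FlavorBounds {
    static long minFlavors(long x) { return x * (x - 1) / 2; }
    static long maxFlavors(long x) { return x * (x - 1) / 2 + x; }

    // binary search: smallest x whose maximum flavor count reaches n
    static long smallestFeasibleX(long n) {
        long lo = 0, hi = 2_000_000_000L;
        while (lo < hi) {
            long mid = lo + (hi - lo) / 2;
            if (maxFlavors(mid) >= n) hi = mid; else lo = mid + 1;
        }
        return lo;
    }
}
```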

Java

C++

Firstly, as long as you are not completely avoiding watching movies, the total mood points deducted for watching all movies, regardless of $a_i$, will only depend on the day of the last movie you watch.

For example, if you watch the last movie on the third day, you only need to subtract $3 \cdot d$. After that, it's irrelevant to d.

What's left to consider is the sum of several $a_i$ values. Naturally, you'd want to choose a positive $a_i$ as large as possible.

So, you can enumerate the day x on which you watch the last movie. Then, use a priority queue over the $a_i$ values on or before day x; if the queue's size exceeds m, discard the smallest values.

*One issue is that $a_x$ might not be chosen as the last movie day. It's possible that the last movie day is actually y. However, since you deducted an extra $(x-y) \cdot d$, this will not lead to a better result than before.*
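Here is one way the enumeration could look (a sketch under assumptions: at most m movies may be watched, d points are deducted per day up to the last movie day, and watching nothing scores 0):

```java
import java.util.PriorityQueue;

// Enumerate the last movie day; keep the best <= m positive values seen so far.
class Movies {
    static long best(long[] a, int m, long d) {
        PriorityQueue<Long> chosen = new PriorityQueue<>(); // min-heap, size <= m
        long sum = 0, ans = 0; // ans = 0 covers watching no movies at all
        for (int i = 0; i < a.length; i++) {
            if (a[i] > 0) { // negative movies are never worth watching
                chosen.add(a[i]);
                sum += a[i];
                if (chosen.size() > m) sum -= chosen.poll(); // drop the smallest
            }
            ans = Math.max(ans, sum - (long) (i + 1) * d); // last movie on day i+1
        }
        return ans;
    }
}
```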

Java

C++

The time limit is 4 seconds, and Codeforces machines handle on the order of $10^8$ simple operations per second. So, this $O(n \cdot \sum{s}) = O(n^2 \cdot s)$ solution with approximately $10^8$ operations can pass comfortably and runs in about 800ms (even in Java).

First, you can solve a 0/1 knapsack problem to find all possible total sums of these n monsters.

Once you have this total, you can binary search for the required time $ans$. Assuming it takes $ans$ seconds, you can then binary search for the largest combination that is less than $ans \cdot w$. If the remaining monsters can be defeated in less than $ans \cdot f$ time, you can defeat all monsters in $ans$ seconds.
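The knapsack step is the standard reachable-sums DP; a minimal sketch (names are mine):

```java
// 0/1 knapsack reachability: which subset sums can be formed from s[].
class SubsetSums {
    static boolean[] reachable(int[] s) {
        int total = 0;
        for (int v : s) total += v;
        boolean[] dp = new boolean[total + 1];
        dp[0] = true;
        for (int v : s)
            for (int t = total; t >= v; t--) // descending: each item used once
                if (dp[t - v]) dp[t] = true;
        return dp;
    }
}
```

The binary searches then run over this boolean table (or a sorted list of the reachable sums).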

Java

C++

Firstly, Java users should be cautious when using Java features, as they can lead to TLE (Time Limit Exceeded). For example, consider the following:

It's easy to observe that, no matter what, the difference between adjacent numbers in the sorted array will reduce by 1 each time.

Also, regardless of the transformation, smaller numbers will never exceed larger numbers; at most, they will become the same and merge.

So, if two adjacent numbers need to become the same, and their difference is x, it will always take x steps.

Assuming b is the sorted version of a (just for explanatory purposes, not necessary to compute), the problem boils down to finding the maximum $b_i - b_{i-1}$ and the maximum $a_i$.

Maintaining the maximum $a_i$ is straightforward with a set.

In each operation, remove the old $a_i$ and add the new $a_i$ to the set.

You can also maintain the maximum $b_i - b_{i-1}$ using a set.

In each operation, find the two neighbors of $a_i$ (denoted as pre_left and pre_right), remove $a_i - \text{pre_left}$ and $\text{pre_right} - a_i$ from the set, and add $\text{pre_right} - \text{pre_left}$. This effectively removes the old $a_i$.

Next, in the set that maintains the values of $a_i$, find the two neighbors of the new $a_i$ (denoted as next_left and next_right), remove $\text{next_right} - \text{next_left}$, and add $a_i - \text{next_left}$ and $\text{next_right} - a_i$. This effectively adds the new $a_i$.

Note that since the values of $a_i$ can be repeated, and hence $b_i - b_{i-1}$ can also be repeated, both sets should maintain both the value and the index.
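As a sketch of this bookkeeping: instead of storing (value, index) pairs, the version below handles repeats with counted `TreeMap`s, which is an equivalent trick (duplicate values contribute a gap of 0):

```java
import java.util.TreeMap;

// Multiset of values plus a multiset of adjacent gaps in sorted order.
class MaxGapMultiset {
    private final TreeMap<Long, Integer> vals = new TreeMap<>();
    private final TreeMap<Long, Integer> gaps = new TreeMap<>();

    private void gapAdd(long g) { gaps.merge(g, 1, Integer::sum); }
    private void gapRemove(long g) {
        gaps.merge(g, -1, Integer::sum);
        if (gaps.get(g) == 0) gaps.remove(g);
    }

    void add(long x) {
        Integer cnt = vals.get(x);
        if (cnt != null) { vals.put(x, cnt + 1); gapAdd(0); return; } // duplicate
        Long lo = vals.lowerKey(x), hi = vals.higherKey(x);
        if (lo != null && hi != null) gapRemove(hi - lo); // split the old gap
        if (lo != null) gapAdd(x - lo);
        if (hi != null) gapAdd(hi - x);
        vals.put(x, 1);
    }

    void remove(long x) {
        int cnt = vals.get(x);
        if (cnt > 1) { vals.put(x, cnt - 1); gapRemove(0); return; }
        vals.remove(x);
        Long lo = vals.lowerKey(x), hi = vals.higherKey(x);
        if (lo != null) gapRemove(x - lo);
        if (hi != null) gapRemove(hi - x);
        if (lo != null && hi != null) gapAdd(hi - lo); // merge the two gaps
    }

    long maxGap() { return gaps.isEmpty() ? 0 : gaps.lastKey(); }
    long maxValue() { return vals.lastKey(); }
}
```

Each update is then `remove(oldValue)` followed by `add(newValue)`, both $O(\log n)$.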

Java

C++

Suppose you need to sort an array or have a TreeSet that requires a specific order.

Let's take the following class as an example:

```
class Node {
    int index, value;

    public Node(int index, int value) {
        this.index = index;
        this.value = value;
    }
}
```

When defining a TreeSet for this class, you can use one of the following two approaches:

```
new TreeSet<>(Comparator.comparingInt((Node o) -> o.value).thenComparingInt(o -> o.index));
```

```
new TreeSet<>((o1, o2) -> o1.value != o2.value ? Integer.compare(o1.value, o2.value) : Integer.compare(o1.index, o2.index));
```

Clearly, the first approach is elegant, but unfortunately, it's slower. It's recommended to use the second approach. In fact, even the second approach can still be slow; you can use `o1.value - o2.value` directly (safe only when the subtraction cannot overflow). You could even consider packing the two ints into a long for comparison.

In short, Java's standard library is heavily engineered, with many checks and deep call stacks.

Methods like comparingX often require passing a lambda function, and their source code typically includes null checks. If you also need to use thenComparingX, the call stack becomes even deeper. The call stack for the first approach is approximately as follows:

```
treeSet -> comparator_2 -> comparator_1 -> lambda_1 -> lambda_2
```

On the other hand, the call stack for the second approach is simpler:

```
treeSet -> comparator
```

In cases where the necessary logic is the same, the first approach results in a lot of additional function calls, making it slower.

The data size is very small, all within 100. So, you can simply simulate pouring water from the larger vessel to the smaller one step by step.

Java

C++

Obviously, once you pass a trap, you must return after $\lfloor{\frac{s-1}{2}}\rfloor$ steps.

So, we only need to calculate the maximum value of k for each trap that is stepped on and returned to, and then take the minimum.

Java

C++

Assuming a and b exist, and their greatest common divisor is g. Then, $(a+b)=g\cdot(\frac a g + \frac b g)$. In other words, a+b must be divisible by g, and $\frac a g + \frac b g > 1$.

So, we only need to find any composite number $x$ between l and r. We choose one of its factors as g, and set $a=g$ and $b=x-g$ (so $\frac b g = \frac x g - 1$).
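A sketch of that search (names are mine; I take g as the smallest prime factor of the first composite found):

```java
// Find a composite x in [l, r]; with g its smallest factor > 1,
// a = g and b = x - g satisfy gcd(a, b) = g > 1 and a + b = x.
class CompositeSplit {
    static long[] find(long l, long r) {
        for (long x = l; x <= r; x++) {
            for (long g = 2; g * g <= x; g++) {
                if (x % g == 0) return new long[]{g, x - g};
            }
        }
        return null; // no composite in [l, r]
    }

    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }
}
```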

Java

C++

Clearly, the least common multiple of x and y corresponds to a number that does not affect the final result. So, we just need to make the remaining numbers corresponding to x as large as possible, and the remaining numbers corresponding to y as small as possible.

Java

C++

I'm not entirely sure if there is a simpler way to solve this problem. I directly used a segment tree, maintaining the XOR sum of intervals corresponding to g=0 and g=1.

Each update operation involves swapping the XOR sums of g=0 and g=1. Finally, you can query the XOR sum of the entire interval.
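A minimal sketch of such a segment tree (names and the exact interface are mine): each node stores the XOR of its values for g=0 and g=1, and a lazy flag swaps the two groups over a range.

```java
// Segment tree over (value, group) pairs; a range "flip" swaps groups.
class XorSegTree {
    private final int n;
    private final long[][] xs;     // xs[node][g] = xor of values with group g
    private final boolean[] flip;  // lazy swap flag

    XorSegTree(long[] vals, int[] groups) {
        n = vals.length;
        xs = new long[4 * n][2];
        flip = new boolean[4 * n];
        build(1, 0, n - 1, vals, groups);
    }

    private void build(int node, int l, int r, long[] v, int[] g) {
        if (l == r) { xs[node][g[l]] = v[l]; return; }
        int m = (l + r) / 2;
        build(2 * node, l, m, v, g);
        build(2 * node + 1, m + 1, r, v, g);
        pull(node);
    }

    private void pull(int node) {
        xs[node][0] = xs[2 * node][0] ^ xs[2 * node + 1][0];
        xs[node][1] = xs[2 * node][1] ^ xs[2 * node + 1][1];
    }

    private void apply(int node) { // swap the two group sums
        long t = xs[node][0]; xs[node][0] = xs[node][1]; xs[node][1] = t;
        flip[node] ^= true;
    }

    void flipRange(int ql, int qr) { flipRange(1, 0, n - 1, ql, qr); }

    private void flipRange(int node, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { apply(node); return; }
        if (flip[node]) { apply(2 * node); apply(2 * node + 1); flip[node] = false; }
        int m = (l + r) / 2;
        flipRange(2 * node, l, m, ql, qr);
        flipRange(2 * node + 1, m + 1, r, ql, qr);
        pull(node);
    }

    long totalXor(int g) { return xs[1][g]; } // whole-array query
}
```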

Java

C++

Consider the fear value $a_i$ of animal i as a directed edge from i to $a_i$. This gives us a directed graph, where each point in this graph can have only one outgoing edge and any number of incoming edges. The structure of this graph should roughly resemble something like the following:

Naturally, for points with no incoming edges, removing them has no effect on the income. So, we prioritize removing points with an in-degree of 0.

This leaves us with several cycles.

For these cycles, we choose to remove the point with the minimum value. Therefore, we find the point x with the minimum value on the cycle and remove $a_x$.

Java

C++

First, if $a_i\geq 2$, then choosing multiplication over addition is obviously better. So, the fundamental problem lies in how to handle $a_i=1$.

It's clear that the 1s at the far left and far right must be excluded from the range [l, r]. So, we can assume that the beginning and end of the array are not 1s.

In that case, the remaining situations are [2,1,1,1,1,1,1,2] and [3,1,3], where there are many 1s in between. In such cases, addition is better than multiplication.

However, there can be at most $10^5$ 1s in between, so their total sum is at most $10^5$. Hence, once there are roughly 20 occurrences of 2, multiplication always beats addition no matter how the range is chosen (excluding the leftmost and rightmost 1s), because $2^{20}>10^5$.

So, the general idea is that when the product of the non-1 numbers in the array reaches a certain threshold, you can simply choose the largest possible interval for l and r (excluding the leftmost and rightmost 1s). Otherwise, you can brute force the positions of l and r.

It's worth noting that the threshold should not be $10^5$. For example, it could be something like $[2, 1, 1, ..., 1, 5\cdot 10^4]$. The result of choosing multiplication would be $10^5$, but the result of addition would be $(10^5-2)+2+5\cdot 10^4=1.5 \cdot 10^5$.

I didn't calculate the exact threshold; I simply used $10^6$. In any case, $2^{32} > 10^9$, so there are fewer than 32 non-1 numbers no matter what.

Java

C++

⚠️

I am personally rather conservative and default to pessimistic estimates, so I may be underestimating ChatGPT here.

Recently, ChatGPT has become very popular, so I also want to try it out. I chose to try it out on finance and code, which are the areas I care about most.

The entire trial was short and mainly focused on having ChatGPT calculate the IRR (Internal Rate of Return) and help me write code to calculate the IRR.

The conclusion is that ChatGPT is powerful but still has a lot of room for improvement.

First, I tried to ask about the price of a fund a month ago and the stock price of a company. Although this information is publicly available, I felt that ChatGPT was considering legal risks, so it did not provide this data (or if it had the data, it did not use it).

Then, I asked ChatGPT questions about asset allocation strategies. Perhaps because these are already considered general investment advice, ChatGPT did not dodge and gave relatively reasonable answers.

Then I tried to ask ChatGPT about a specific asset allocation scenario and asked it to help me calculate the internal rate of return (IRR).

First, I asked ChatGPT if it understood the meaning of IRR. It answered that it understood and was willing to calculate for me. Next, I provided a cash flow sample from last year and asked it to calculate the IRR for the whole year of last year. Interestingly, the Excel operation method it gave seemed to have no major problems and it understood my language description. However, the result it gave was completely wrong. It is obvious that the correct result should be less than 5%.

Next, I asked ChatGPT to provide Java code to calculate IRR. It gave me the following code:

```
import java.math.BigDecimal;
import java.math.RoundingMode;

public class IRRCalculator {
    public static BigDecimal calculateIRR(double[] cashFlows, double accuracy) {
        BigDecimal irr = new BigDecimal(0.10);
        BigDecimal x1 = new BigDecimal(0.0);
        BigDecimal x2 = new BigDecimal(1.0);
        while (irr.compareTo(x1) == 1 && irr.compareTo(x2) == -1) {
            BigDecimal fx = new BigDecimal(0.0);
            BigDecimal dfx = new BigDecimal(0.0);
            for (int i = 0; i < cashFlows.length; i++) {
                BigDecimal x = irr.add(new BigDecimal(1.0)).pow(i);
                fx = fx.add(new BigDecimal(cashFlows[i]).divide(x, 10, RoundingMode.HALF_UP));
                dfx = dfx.add(new BigDecimal(cashFlows[i] * i).divide(x.multiply(irr), 10, RoundingMode.HALF_UP));
            }
            BigDecimal tmp = irr.subtract(fx.divide(dfx, 10, RoundingMode.HALF_UP));
            x1 = irr;
            irr = tmp;
            if (irr.compareTo(new BigDecimal(0.0)) == -1) {
                irr = x1.multiply(new BigDecimal(0.5));
                x2 = x1;
            }
        }
        return irr.setScale(6, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        double[] cashFlows = new double[]{-100, 20, 10, 0, 0, 0, 10, 0, 105};
        BigDecimal irr = calculateIRR(cashFlows, 0.00001);
        System.out.println("IRR: " + irr);
    }
}
```

There are many problems with this code:

- The calculated IRR value is 3.148211%, which is obviously incorrect, and the correct result should be around 0.4%. The 10% calculated by ChatGPT earlier is really far off.
- In the code, the parameter "accuracy" is not used.
- The x1 and x2 bounds on the IRR are seriously flawed. Since the code discounts by 1 + IRR, the IRR should be negative in the case of a loss; but with x2 as an upper limit, the code can only handle return rates between 0% and 100%.
- The code uses an enumeration method instead of a binary method, which may result in low efficiency.

So I pointed out the problems in this code and asked ChatGPT to fix them. It then provided the following code:

```
import java.math.BigDecimal;

public class IRRCalculator {
    private static final BigDecimal TOLERANCE = BigDecimal.valueOf(0.00001);
    private static final int MAX_ITER = 100;
    private final double[] cashFlows;

    public IRRCalculator(double[] cashFlows) {
        this.cashFlows = cashFlows;
    }

    public double calculateIRR(double accuracy) {
        double x1 = 0.0;
        double x2 = 1.0;
        double fx1 = npv(x1);
        while (Math.abs(x2 - x1) > accuracy) {
            double fx2 = npv(x2);
            if (fx1 * fx2 < 0.0) {
                break;
            }
            x1 = x2;
            fx1 = fx2;
            x2 *= 2;
        }
        double irr = 0.0;
        for (int i = 0; i < MAX_ITER; i++) {
            irr = (x1 + x2) / 2;
            double npv = npv(irr);
            if (Math.abs(npv) < TOLERANCE.doubleValue()) {
                break;
            }
            if (npv < 0.0) {
                x1 = irr;
            } else {
                x2 = irr;
            }
        }
        return irr;
    }

    private double npv(double irr) {
        double npv = 0.0;
        for (int i = 0; i < cashFlows.length; i++) {
            npv += cashFlows[i] / Math.pow(1.0 + irr, i + 1);
        }
        return npv;
    }
}
```

The result is that the problems with this code are even worse.

- The results are completely incorrect. Using the previous data, the calculated IRR is directly 100%.
- Ignored my third requirement. To be honest, this requirement is indeed a bit difficult to implement.
- I have no idea what this code is doing. But the NPV calculation it wrote looks good.

Firstly, it is amazing that ChatGPT understands my intentions. However, as of now, it still has a long way to go.

The accuracy is still far from satisfactory, and it can barely serve as an auxiliary tool for professional issues. (I don't think "barely" is too harsh a word to use.)

Firstly, if I am not familiar with IRR or the calculation process, a 10% IRR might not seem unreasonable to me. This means that to judge the accuracy of ChatGPT's results, I need to have a sufficient understanding of IRR.

However, if I have a sufficient understanding of IRR, ChatGPT may not have an advantage over other specialized calculators.

Regarding ChatGPT's code writing, I think there is still a long way to go.

Even with an optimistic estimate (its first IRR attempt does account for a given accuracy, is not much worse than bisection, and the enumeration style is arguably more readable, setting correctness aside for a moment), this should not lower my requirements.

Because the boundary considerations and the complexity analysis, namely "bisection vs. enumeration" and "the possible value range", must be fully understood before the code can be used without problems.

As a result, I feel that my requirements have become even higher. I need to know not only how to write, but also to be vigilant at all times.

Obviously each number can only be replaced by the same letter, so we only need a Map to check whether any number has been replaced by two different letters.

Java

C++

Count the number and sum of odd and even numbers. For each query, we just need to multiply x by the number and add it to the sum.

Also note that when x is odd, the parity changes, so that group's count and sum must be moved to the other type.
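The bookkeeping is only a few lines; a sketch (names are mine, non-negative values assumed):

```java
// Keep count and sum per parity; "add x to all numbers of parity p" is O(1).
class ParitySums {
    long[] cnt = new long[2], sum = new long[2]; // index 0 = even, 1 = odd

    ParitySums(long[] a) {
        for (long v : a) {
            int p = (int) (v & 1);
            cnt[p]++;
            sum[p] += v;
        }
    }

    // add x to every number of parity p; returns the new total sum
    long addToParity(int p, long x) {
        sum[p] += cnt[p] * x;
        if ((x & 1) == 1) { // odd x flips the parity of the whole group
            int q = p ^ 1;
            cnt[q] += cnt[p];
            sum[q] += sum[p];
            cnt[p] = 0;
            sum[p] = 0;
        }
        return sum[0] + sum[1];
    }
}
```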

Java

C++

Just do the simulation and record the indexes of all the green lights. Then check from left to right; as the query pointer moves right, the corresponding green-light pointer also moves monotonically to the right.

When the green-light pointer can no longer move right, it must wrap into the next cycle.

Java

C++

First count how many factors of 2 there are across all a[i], and how many factors of 2 each index i has.

If the a[i] alone do not supply enough factors of 2, sort the indices and prefer the i with the most factors of 2.

Java

C++

Obviously, if the condition is changed to $a \leq x \leq c$ and $b \leq y \leq d$, then just make x=a, y=b directly.

Therefore, it is naturally conceivable that we multiply x and y by a number as small as possible. For example 2.

Further, we can see that if x and y start at a and b respectively, we can give y a factor of x and give x a factor of y, and then multiply whichever was reduced by the smallest number that makes it exceed its original value; this gives a better result than multiplying directly by 2.

For example, a is divisible by 2 and b is divisible by 3. Then exchange factors 2 and 3, so $x=\frac 3 2 \cdot a$, $y=\frac 2 3 \cdot b$. So x only increases by a factor of 1.5. y needs to be multiplied by some number to exceed b.

So we enumerate the cases where all the factors of x and y are exchanged with each other.

Java

C++

Obviously, if mex is greater than med, then every value smaller than the median must appear; other numbers don't matter.

So for a given length, the result of its med() must be fixed, and all values less than or equal to med() must appear.

So we only need to enumerate the length. For each length, we calculate the minimum interval that contains all the numbers that must appear, and then calculate how many ways to expand to the specified length based on this minimum interval.

Java

C++

💡

My solution to this problem uses a relatively complex segment tree rather than a possibly simpler greedy approach.

- Sort b while preserving each element's original index.
- For each $b_i$, count the available numbers and the pending numbers; from these two values compute its "number of redundancy".
- Find the smallest $b_i$ whose "number of redundancy" is 0.
- Among $b_0$ to $b_i$, find the $b_j$ with the smallest original index, and assign the currently smallest available number to $b_j$.
- After the assignment in #4, repeat from #2 until all numbers are allocated.

**Let's start with #3 first.**

Consider the number of numbers that can be assigned for each $b_i$.

For example, when $b=[3, 4, 7, 8]$, there are only 2 numbers usable by 3 and 4, namely 1 and 2; and there are 4 numbers usable by 7 and 8: 1, 2, 5, 6.

It is not difficult to see that the two numbers usable by 3 and 4 must be assigned to 3 and 4, not to 7 and 8: 3 and 4 have two pending numbers and exactly 2 usable numbers.

Furthermore, after 3 and 4 are allocated, only 2 usable numbers remain for 7 and 8, and exactly 2 pending numbers for 7 and 8 remain as well. (Initially there were 4 usable numbers for 7 and 8, and 4 pending numbers.)

Therefore, we can draw a conclusion: the allocation of small numbers will affect large numbers, and we should give priority to the smaller $b_i$ whose "number of redundancy" is 0.

For example $[3, 4, 7, 8]$, although the "number of redundancy" of 4 and 8 are both 0, but in #3, we prioritize 4 instead of 8.

**Then we go back to #2.**

For each $b_i$, we only need to count how many $b_j \le b_i$ are still unallocated. So 3 actually has only 1 pending number, 4 has 2, 7 has 3, and 8 has 4.

In other words, 3 does not consider the situation of 4, and 7 does not consider the situation of 8.

Because if we change b to $[3, 5, 7, 8]$, 3 and 5 no longer need to be processed in advance. So when counting pending items, we only count how many $b_j$ are less than or equal to the current value.

**Let's move on to #4.**

We already know that 1 and 2 must be assigned to 3 and 4. The next question is where to place 1. Obviously this depends on the original index retained in #1.

We assign 1 to the smallest original index. That is, if the original 4 is in front of the 3, it is assigned to 4.

It should be noted that after we have allocated 1, we cannot immediately allocate the remaining numbers (only 2 in this example), because after each allocation the "number of redundancy" of the remaining entries may change.

For example, after 1 is allocated to 4, the "number of redundancy" of 3 changes from 1 to 0, because there are 2 numbers available for 3 but only 1 pending number originally. After 1 is allocated to 4, only 2 is left for 3.

This is more obvious in sample 6: $b=[8, 7, 4, 5]$ (sorted: $[4, 5, 7, 8]$). At the beginning, only 8 has a "number of redundancy" of 0, and 8 also happens to be the entry with the smallest original index. But after 1 is allocated to 8, 2 cannot simply go to the remaining entry with the smallest original index (that is, 7); instead we must go back to #2 and process 4 and 5 first.

Let's take sample 6 as an example: $b=[8, 7, 4, 5]$.

#1, we sort and keep the original index. We get $[(4: 2), (5: 3), (7: 1), (8: 0)]$.

#2, we calculate the current "number of redundancy". We get $[(4, 2, 2), (5, 3, 1), (7, 1, 1), (8, 0, 0)]$

For example, 5 has [1,2,3] a total of 3 number is available, minus 2 pending numbers, so there is 1 "number of redundancy".

#3, only one "number of redundancy" is 0: (8, 0, 0).

#4, among all the entries up to 8, the one with the smallest original index is 8 itself. So we assign 1 to 8.

Repeat #2, the "number of redundancy" after update is: $[(4, 2, 1), (5, 3, 0), (7, 1, 0)]$

Here, because 8 has been allocated, it is deleted. At the same time 1 is no longer available, so the remaining "number of redundancy" are all decremented by 1.

#3, at this time the "number of redundancy" of 5 and 7 are both 0. We choose the smaller 5 to continue processing.

#4, among all entries up to 5 (that is, 4 and 5, excluding 7), the one with the smallest original index is 4. So we assign 2 to 4.

Repeat #2 to get $[(5, 3, 0), (7, 1, 0)]$

Here, only 3 is left for 5 to be allocated, and since 4 has been deleted, 5's pending count also drops to 1. So the "number of redundancy" of 5 is still 0.

#3 and #4, assign 3 to 5.

Repeat #2, #3, #4, assign 6 to 7.

Obviously, at any time, once the "number of redundancy" is less than 0, there must be no solution.
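Before moving to the data structure, the "number of redundancy" computation in #2 can be sketched directly. For sorted distinct b (1-based values), the numbers below $b_i$ not themselves in b number $b_i - 1 - i$, and the pending entries number $i + 1$:

```java
// "Number of redundancy" from step #2, for sorted distinct b.
class Redundancy {
    static long[] compute(long[] sortedB) {
        long[] red = new long[sortedB.length];
        for (int i = 0; i < sortedB.length; i++) {
            long available = sortedB[i] - 1 - i; // values < b[i] not in b
            long pending = i + 1;                // b[0..i] still need a number
            red[i] = available - pending;
        }
        return red;
    }
}
```

On the examples above this gives $[2,1,1,0]$ for $[4,5,7,8]$ and $[1,0,1,0]$ for $[3,4,7,8]$, matching the walkthrough.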

Look at #4 first: its essence is to find the minimum original index within the interval from 0 to the $i$ obtained in #3. This immediately suggests a segment tree maintaining the range minimum of original indices.

For #3, we only need to maintain the range minimum of the "number of redundancy" and, with binary search, find the smallest index $i$ such that the prefix minimum of the "number of redundancy" is 0. This can also be implemented with the segment tree.

Then consider the case of deleting the allocated $b_i$.

Note that the segment tree maintains range minimums, so after a number is allocated we can set its original index and "number of redundancy" to infinity. That way, this $i$ never appears in any subsequent step.

Finally consider the effect of assigning a number x to $b_i$ on the "number of redundancy" of other $b_j$.

For $b_j$ larger than $b_i$, the "number of redundancy" is not actually affected, because such $b_j$ lose one available number but also one pending number. (far right)

For $b_j$ smaller than x, it will not be affected. Because this value is impossible to assign to them. (far left)

So in essence, only when $b_j$ lies between x and $b_i$ is its "number of redundancy" reduced by 1, because its available count drops by 1 while its pending count does not. (middle)

But this is a range modification, so it can still be implemented with a segment tree.

Therefore, the entire implementation uses the segment tree to maintain the range minimums of the original index and of the "number of redundancy", and finds the target $i$ of #3 and the position of x with binary search.

Java

C++

After sorting directly, see if the sum of the first two numbers is equal to the third number.

Java

C++

Sort directly, then check whether any two adjacent numbers are equal.

Java

C++

Scan each row to see if you can find a whole row that is R. If so, then the last one painted must be R, otherwise it must be B. (You can’t paint nothing at all)

Java

C++

First, note that the data range is only 1000, and all coprime pairs within 1000 can be found directly with about $10^6$ gcd checks.

We put the found coprime numbers into 1000 lists; the $i$-th list holds exactly the numbers coprime with $i$.

Then, for each number that appears in the array $a$, record the largest index at which it appears, marked as max_pos. (If a value is chosen for the answer, a larger index is always better than a smaller one.)

Finally, for each number $i$ that appears, look up the list of $j$ coprime with it, check whether max_pos of $j$ exists, compute the result for this pair, and take the largest overall.
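A sketch of this precomputation (assumptions are mine: the answer is the sum of the two 1-based indices, maximized over coprime values; instead of materializing the 1000 lists, the sketch loops over value pairs directly, and for brevity it skips the special case of two equal values, which only matters for two 1s):

```java
// Brute-force coprime value pairs (values <= 1000), keeping the largest
// index per value; ~10^6 gcd checks in total.
class CoprimePairs {
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    static long solve(int[] a) {
        int LIM = 1000;
        int[] maxPos = new int[LIM + 1]; // largest 1-based index of each value
        for (int i = 0; i < a.length; i++) maxPos[a[i]] = i + 1;
        long ans = -1;
        for (int v = 1; v <= LIM; v++) {
            if (maxPos[v] == 0) continue;
            for (int w = v + 1; w <= LIM; w++) {
                if (maxPos[w] != 0 && gcd(v, w) == 1)
                    ans = Math.max(ans, (long) maxPos[v] + maxPos[w]);
            }
        }
        return ans; // -1 if no coprime pair of distinct values exists
    }
}
```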

Java

C++

Obviously, binary search.

From the array a, maintain an array max, where max[i] is the maximum of a[0..i], that is, the largest single step encountered on the way to step i.

Also maintain a prefix sum of a.

Therefore, for each input k, just binary search on the max array, then use the found i with the previously maintained prefix-sum array to get the result.

Please note that the values of a can reach $10^9$, so the prefix-sum array must use 64-bit integers.
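A sketch of the whole query path (names are mine; I assume a query with strength k asks for the total distance covered by all steps whose running maximum stays within k):

```java
// Prefix-max plus prefix-sum lookup: for strength k, find the last index
// whose largest single step is <= k, then return the prefix sum there.
class Stairs {
    static long reach(long[] a, long k) {
        int n = a.length;
        long[] maxUpTo = new long[n];
        long[] prefix = new long[n];
        long mx = 0, sm = 0;
        for (int i = 0; i < n; i++) {
            mx = Math.max(mx, a[i]);
            sm += a[i];
            maxUpTo[i] = mx;
            prefix[i] = sm; // 64-bit: values can reach 1e9
        }
        int lo = -1, hi = n - 1; // binary search: last i with maxUpTo[i] <= k
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            if (maxUpTo[mid] <= k) lo = mid; else hi = mid - 1;
        }
        return lo < 0 ? 0 : prefix[lo];
    }
}
```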

Java

C++

We count the number of letters in s and t.

Obviously, if there is a non-a letter in the string t, put that non-a letter at the front of t, and at the same time put an a at the front of s.

Now t is full of letters a. Now suppose there is a letter in s that is not a.

If the number of letters a in s is less than t, then obviously s is greater than t anyway.

If the number of a in s is more than or equal to t, then s must be longer than t, so s is still larger than t.

In summary, as long as there is a letter other than a in s, then s must be greater than t.

In the end, there are only scenarios where both s and t are a's. Then just compare the length directly.

Java

C++

First of all, it is obvious that the array b is non-decreasing. At the same time, given the value range of a, b has at most 64 distinct values, because each increase must OR at least one new bit into 1. (In fact it's 32, but I don't want to think about long vs int, and writing 64 works the same.)

Therefore, only the arrangement of the first 64 positions of array a affects the result. (If b were still increasing after 64 positions, moving the corresponding a forward would only improve the result.)

So directly brute-force the first 64 positions: for the i-th position, pick the element that makes the corresponding b[i] as large as possible.
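The greedy step can be sketched as follows (names are mine; non-negative values assumed):

```java
// Greedy reorder: up to 64 times, pick the element that maximizes the
// running OR; after that the prefix OR can no longer grow.
class OrReorder {
    static long[] reorder(long[] a) {
        long[] res = a.clone();
        long cur = 0;
        int steps = Math.min(res.length, 64);
        for (int i = 0; i < steps; i++) {
            int best = i;
            for (int j = i; j < res.length; j++) {
                if ((cur | res[j]) > (cur | res[best])) best = j;
            }
            long t = res[i]; res[i] = res[best]; res[best] = t; // move to front
            cur |= res[i];
        }
        return res;
    }
}
```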

Java

C++

⚠️

This article comes with a lot of pictures

Nextcloud is an open-source network storage system based on ownCloud. Nextcloud's main selling point is providing good privacy alongside comprehensive functionality.

I won't describe the cloud storage functions in much detail here; they are like most cloud storage on the market. Files can be accessed through the web page, and there is also a client that automatically synchronizes files from a local directory to the cloud.

It can probably be compared to Baidu Cloud or Google Drive.

What appeals to me most about Nextcloud is its chat function, because communication between family members often demands a high level of privacy. The exception is video chat, which has particularly high network-latency requirements, so I can't use it. Support for everyday functions such as text chat and sending pictures is very good.

Nextcloud provides a plug-in called "Talk". After installing it, you can use the chat function on the web page.

In the screenshot above, I naturally have to mask the chat content between me and my family. The message in the picture actually comes from my monitoring and alerting service.

The bad news is that Grafana doesn't natively support Nextcloud Talk. The good news is that Nextcloud provides a rich enough API, so I built a service to forward Grafana's webhooks. Nextcloud also provides very detailed integration documentation.

What's more, Nextcloud provides clients for iOS and Android.

As for tools such as WeChat and WhatsApp, we basically don't use them for communication within the family. As for how to convince family members to switch, everyone has their own way.

Nextcloud can play videos on web pages or clients. Although it is not as good as some professional home theater services, it can meet basic needs.

The advantage is that as long as you can reach your own Nextcloud service and have enough bandwidth, you can play videos from anywhere. And since Nextcloud itself is a cloud drive, sharing a video takes only one click.

The downside is that Nextcloud lacks the usual theater features, such as resuming playback where you left off. However, Nextcloud supports WebDAV, so extending it is not difficult.

Sharing is a dangerous thing. Downloading some videos to watch myself is not a big issue, but sharing them can easily cause copyright problems (under Chinese law). Family and friends in particular may not be aware of this. So on the one hand, explaining this to family members is important; on the other hand, mandatory sharing policies are also necessary.

Nextcloud can force shares to expire, and force all shares to set a password.

For mobile devices, whether it is the Nextcloud client or the chat client, fingerprint password protection can be set.

(Cannot take screenshots due to phone security settings)

Nextcloud can enforce two-factor authentication, enable server-side encryption, and set password policies, including automatic expiration.

At the same time, Nextcloud has plugins for LDAP and SAML, which you can use to log in to all services within the family with a unified account.

Nextcloud also provides photo-map functionality (again via a plugin, called Maps): it places photos on a map based on the geographic location embedded in the images.

Apple seems to offer a similar feature, and so does Google. But since Nextcloud can already solve the problem within one service, why should I share my information with a third party?

But this is a matter of opinion. I have not convinced, nor tried to convince, family members to use this feature. Unlike chat, archiving photos is a personal matter, and there is no point interfering with each other.

On Windows, the Nextcloud client also provides virtual file support (it doesn't seem to support macOS). The effect is that you can choose to synchronize only the file list from the server, and a file is downloaded only when it is actually opened or used.

First, Nextcloud can be run entirely with Docker (including the database and the service itself). So as long as the corresponding files are backed up, the entire service is backed up.

My personal approach is to deploy it directly on K8s and mount a Longhorn volume, which is RAID 1. There is basically no service interruption due to hard disk failure, and Longhorn can also back up the volume regularly.

Therefore, since Longhorn's RAID 1 already provides high reliability, and Longhorn is backed up regularly on top of that, the setup is basically very solid.

My personal verdict: nice!

Although it is true that a self-signed certificate can be trusted on the client side, the disadvantage is that the certificate must be trusted once on every client device (for example, in Microsoft's official Remote Desktop 8 on Android).

Since I use Let's Encrypt certificates, I'm noting down here how to configure a certificate signed by Let's Encrypt on a Windows Remote Desktop server.

You need a Let's Encrypt certificate. I won't go into how to generate one here; let's just confirm what the certificate files should look like. First of all, the certificates should all be PEM files, and the certificate should obviously cover the domain name of the server machine, for example `windows.xloypaypa.pub`.

The certificates folder should contain the following:

```
.
├── cert.pem
├── chain.pem
├── fullchain.pem
├── privkey.pem
└── README
```

A PFX certificate is a certificate that contains both the public key and the private key. I personally don't know whether Windows Remote Desktop can be configured directly with a pair of PEM files, but converting to PFX is not difficult anyway. All it takes is one command:

`openssl pkcs12 -export -out certificate.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem`

You just need to provide a password for the command above.

Copy the PFX certificate to the server and double-click it. Then follow the wizard to import the certificate into "Certificates" under the "Personal" folder.

- Be careful to install the certificate to the local computer, not the current user.
- Put the certificate in "Personal"

Once finished, it should look like this (the newly imported certificate appears in the Personal folder):

Double-click the certificate on the right side of the certificate manager to see its details and get the thumbprint:

Then run in Powershell as administrator:

`wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="<The certificate fingerprint obtained above>"`

At this point, the certificate configuration for Windows Remote Desktop Services is completed.

💡

The main content of this article: a general introduction to integrating OpenStreetMap through Leaflet with TypeScript + React.

Instead of asking why choose OpenStreetMap, it's better to ask: why not choose well-known maps such as Baidu Maps or Google Maps?

Because OpenStreetMap is open source and free; you can refer to their about page for details.

Taking Google Maps as an example: integrating with it requires registering for GCP, and you are charged once a certain amount of usage is exceeded. For details, please refer to this document.

Other maps, such as ***du, are even less reliable than Google. In fact, I also want to avoid depending on services from commercial companies as much as possible.

Another consideration is reliability, in two main aspects: 1. how many people use it; 2. whether it is stable.

From this page, we can see that the map is used by some European and American government agencies, by Wikipedia, and so on. So it has plenty of users. In addition, Nextcloud (which I use) also integrates with this map. So although OpenStreetMap doesn't have as many users as Google, it still has a solid user base.

As for stability: first of all, I believe those government departments must have considered this issue too. Secondly, my own Nextcloud usage can be considered fairly deep, and in about 4-5 years of experience I have never seen it break.

Therefore, considering both reduced commercial dependence and reliability, I chose OpenStreetMap.

Because when you google "react openstreetmap", the first result is React Leaflet.

Leaflet actually integrates OpenStreetMap by default, so integration is easy. However, Leaflet's documentation is still a little tricky.

First, here is a working demo implemented by someone else.

For installation, the official documentation is very clear. You just install three libraries: leaflet, react-leaflet, and @types/leaflet.

So just `yarn add` them.

After installing the dependencies, if you implement the "Setup" code from the official documentation directly, you will probably get the following picture:

The root cause is that Leaflet's CSS is not imported. There are two ways to import it. One option is to reference it in index.html (as the demo above does), downloading it directly from Leaflet (external).

The other option is to import the CSS from the package directly in the tsx file:

```
import 'leaflet/dist/leaflet.css';
```

After importing the CSS, the entire map displays correctly.

Similar problems, such as missing Marker icons, can be solved in a similar way:

```
import L from 'leaflet';
import icon from 'leaflet/dist/images/marker-icon.png';
import iconShadow from 'leaflet/dist/images/marker-shadow.png';

// bundlers lose the default icon URLs, so point them back explicitly
let DefaultIcon = L.icon({
    iconUrl: icon,
    shadowUrl: iconShadow
});
L.Marker.prototype.options.icon = DefaultIcon;
```

Have to say that the code on the official website is very nice.

Copy the code from the official website:

```
<MapContainer center={[51.505, -0.09]} zoom={13} scrollWheelZoom={false}>
  <TileLayer
    attribution='© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors'
    url="https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"
  />
  <Marker position={[51.505, -0.09]}>
    <Popup>
      A pretty CSS3 popup. <br /> Easily customizable.
    </Popup>
  </Marker>
</MapContainer>
```

In short, it is a simple and easy-to-use map tool, though the documentation is a little lacking.

💡

This article solves the problem of installing ffmpeg and other dependencies in a Nextcloud installed via Docker, without writing a new Dockerfile or using a third-party image.

```
docker run -p 80:80 --entrypoint /bin/sh nextcloud -c "apt-get update && apt-get install -y ffmpeg && /entrypoint.sh apache2-foreground"
```

By the following command:

```
docker image inspect nextcloud
```

We can see that the official Nextcloud image starts by running /entrypoint.sh with the argument "apache2-foreground".

Therefore, to install ffmpeg before Nextcloud starts, we only need to change the entrypoint to /bin/sh, install the dependencies in CMD, and finally call Nextcloud's original startup script.

Pros: No need to customize the image, which is convenient for Nextcloud running in K8s. Each time you pull the official image directly, you get the latest official version, and the dependencies are installed automatically.

Cons: ffmpeg must be downloaded on every start, so startup speed depends heavily on network speed.

Pros: Simple, quick, and easy to operate. With a code repository, the Dockerfile also gives you version management of the dependencies.

Cons: Every time you upgrade Nextcloud, you need to rebuild the image, and you need your own image repository (such as a Docker Hub account).

Pros: Super easy.

Cons: Version updates are not timely, and third-party images may have security risks.

Pros: Nothing in the world is difficult if you are willing to give up. I use ffmpeg to generate video previews, and the final experience is only so-so anyway.

Cons: You will lose a feature.

Obviously, if (r, c) is originally black, output 0. If row r or column c contains a black cell, output 1. If any black cell exists at all, output 2. If there is no black cell anywhere, output -1.

Java

C++

Obviously Rahul will choose a seat as close to the middle as possible, and Tina will hide in one of the 4 corners. When the middle positions are painted pink, Rahul will choose the remaining position that is best against the 4 corners (i.e., minimizes the maximum distance to the 4 corners).

So we calculate, for each point, the maximum distance to the 4 corners, then sort and output.

Java

C++

Obviously, the weight of each edge must be a prime number, and of any two adjacent edges, exactly one must have weight 2. Therefore, a solution exists if and only if the tree is actually a chain.

Therefore, we only need to verify whether the tree is a chain, and then fill in 2 and 3 alternately.

Java

C++

For a certain number X: if the GCD of all multiples of X present in the array is exactly X, then X must be in the final array as well.

Because these numbers are all multiples of X, there are only two possibilities for their GCD: 1. equal to X; 2. greater than X.

For the case greater than X, here is an example: [8, 12, 13]. When X is 2, the multiples of 2 are 8 and 12, and their GCD is 4. That means 4 should be in the final result array, but not 2.

So we enumerate each number in order from large to small, and check it as above.

Java

C++

Clearly a DP problem. It's just that the code implementation is slightly cumbersome.

Since k is at most $10^5$, there are actually only at most $2\cdot 10^5 + 2$ relevant points (including the start and end points).

So we can build a map based on these points, and then DP on the map.

We let DP[i][j] denote the minimum cost for the i-th floor to walk to the j-th room.

Obviously, if we know the final cost of floor a, we can update the value of DP[c][d] with DP[a][b] according to the situation of the ladder.

But at this time the value of DP[c][d] is not the final cost, because it may be cheaper to go through some DP[c][e] first. So
$$\text{DP}[c][d] = \min_{e}\Big(\text{DP}[c][e] + \text{x}[c]\cdot|d-e|\Big)$$
Therefore, we only need to traverse once from left to right and once from right to left to get the final value of DP[c][d].

In the end we just need to output the value of the end point.

Java

C++

Count, for each bit, the number of Xs where that bit is 1. If a bit is 1 in more than half of them, then y has a 1 in that bit; otherwise it's 0.

Java

C++

For any two identical numbers, we can at least find an l and r of length 1. Quite naturally, we then want to extend the lengths of l and r.

Obviously, if we expand to the right, the number on the right imposes more restrictions; if we expand to the left, the number on the left imposes more restrictions.

Therefore, two identical numbers that are adjacent occurrences always end up at least as long as two non-adjacent ones.

Therefore, we only need to record the position of the last occurrence of each number, so that the maximum length corresponding to the current number can be quickly calculated each time.

Java

C++

Obviously a DP problem.

dp[i][j] represents the minimum time required to reach the i-th road sign when j road signs are removed.

By definition, obviously dp[0][j]=0. Note that I also define dp[0][2]=0; that is to say, the quota for deleting road signs is allowed to be wasted.

Then, for i not equal to 0: dp[i][x + i - 1 - j] = min{dp[j][x] + a[j]*(d[i]-d[j])}.

That is, consider the minimum time to reach the i-th road sign and there are no other road signs from j to i. Obviously it takes a[j]*(d[i]-d[j]) time to get from j to i.

Here, x road signs were already deleted upon reaching j. Adding the deletion of all road signs between j and i, a total of x + i - 1 - j road signs are deleted.

Finally, take the minimum value over dp[n+1].

Java

C++

Obviously, output the maximum value minus the minimum value.

Java

C++

Just try the operation on each number in turn. As soon as a feasible solution is found, output YES.

Java

C++

For each x from 0 to n, we count which a[i] can generate it.

So each time we take the x (from 0 to n) with the fewest candidate i's, pick any one of those i's, and perform several operations on it to obtain x.

If during this process some x has no candidate i, output No.

Java

C++

Obviously, as long as at most one letter in a string appears an odd number of times and all other letters appear an even number of times, the string can be rearranged into a palindrome.

Then we only need to count how many letter pairs there are (such as aa, bb, cc) and distribute these pairs evenly among the k strings.

From the remainder, we pick at most k letters, which can be added to the k strings as the single odd-count letter.

Java

C++

A good binary search problem.

We binary search the initial value of x. We can find that for the median mid of all possible values of x, we give c such that mid+c is exactly divisible by n.

If x is on the left of mid, then $\lfloor\frac{x+c}{n}\rfloor<\lfloor\frac{mid+c}{n}\rfloor$. Otherwise $\lfloor\frac{x+c}{n}\rfloor=\lfloor\frac{mid+c}{n}\rfloor$.

The only trouble is to record each c we have given, in order to compute the right c for the current mid.

Java

C++

First, we find the highest 1 bit of the final result: add edges in increasing order of bit length until all the nodes are connected together.

At this point we assume the answer is a binary number of all 1s, like 111111. (The OR of all the edges may actually have some 0 bits below the top, but we can ignore that for the procedure below.)

At this time, the highest 1 bit cannot become 0: if it could, it would mean that 5 bits were enough; and if we insisted on replacing the highest bit with 0, an even higher-order bit would be needed to compensate, making the OR value larger.

So we start from the second-highest bit and try to change each bit to 0 in turn, checking whether the remaining edges can still form a spanning tree. That is, we first try 101111, then 1?0111, where ? depends on the result of the first attempt.

This is done because, obviously, we want as many high bits as possible to be 0. Therefore, when deciding whether a certain bit can be 0, all lower bits are kept at 1 (to make forming a tree as easy as possible).

Java

C++
