[AMR][RefinementHandler] Main PR with sod and amr tests #1719
Akos299 wants to merge 44 commits into Shamrock-code:main from
Conversation
Thanks @Akos299 for opening this PR! You can do multiple things directly here. Once the workflow completes, a message will appear displaying information related to the run. The PR is also automatically reviewed by Gemini.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the Adaptive Mesh Refinement (AMR) capabilities of the RAMSES solver by introducing a new pseudo-gradient based refinement criterion. It includes comprehensive C++ backend modifications to support this new mode, alongside new Python examples that validate its functionality for common hydrodynamic test cases. The changes also refine the grid management process by enforcing the 2:1 refinement rule and updating internal data structures to better track and utilize AMR levels.
Code Review
This pull request introduces a new pseudo-gradient-based AMR refinement mode, including new PseudoGradientBased structs and related methods in SolverConfig.hpp and AMRGridRefinementHandler.hpp, along with corresponding Python bindings. It also adds utility functions for pseudo-gradient calculation in SlopeLimitedGradientUtilities.hpp and a rho_primitive field to SolverStorage.hpp and Solver.cpp for use in the new AMR logic, with ConsToPrimGas updated to handle the new field. The review comments highlight a critical bug where refine_list is moved twice, leading to use after move, and several improvement opportunities: removing unused parameters and redundant computations, clarifying confusing comments, addressing a potential logical inconsistency in the err_min/err_max values, and using more robust convergence checks for the iteration loops in the AMR refinement process.
    enforce_two_to_one_refinement(std::move(refine_list));

    /////// enforce 2:1 for derefinement //////
    enforce_two_to_one_derefinement(std::move(derefine_list), std::move(refine_list));
The refine_list is moved into enforce_two_to_one_refinement on line 1246 and then moved again into enforce_two_to_one_derefinement here. Using an object after it has been moved from leaves it in an unspecified state. If the contents of refine_list are still needed after the first move, it should be passed by reference or copied.
    using namespace sham;
    using namespace sham::details;

    auto get_avg_neigh = [&](auto &graph_links, u32 dir) -> T {
    def rhovel_map(rmin, rmax):
        rho = rho_map(rmin, rmax)
    next_dt = model.evolve_once_override_time(t, dt)
    if i == 0:
        dic0 = convert_to_cell_coords(ctx.collect_data())
    dX0.append(dic0["ymax"][i] - dic0["ymin"][i])
    auto get_coord = [](u32 i) -> std::array<u32, dim> {
        constexpr u32 NsideBlockPow = 1;
        constexpr u32 Nside         = 1U << NsideBlockPow;
        constexpr u32 side_size     = Nside;
        constexpr u32 block_size    = shambase::pow_constexpr<dim>(Nside);

        if constexpr (dim == 3) {
            const u32 tmp = i >> NsideBlockPow;
            // This line is why derefinement never happens
            // return {i % Nside, (tmp) % Nside, (tmp) >> NsideBlockPow};
            return {(tmp) >> NsideBlockPow, (tmp) % Nside, i % Nside};
        }
    // // // enforce 2:1 at parent level
    // ///////////////////////////////////////////////////////////////////////////////////

    for (int it = 0; it < 3; it++) {
    u64 id_patch,
    shamrock::patch::Patch p,
    shamrock::patch::PatchDataLayer &pdat,
    Tscal err_min,
    err_min = 0.25
    err_max = 0.15
This commit is the first in a series leading to a PR that updates the AMR algorithms. Specifically, it implements the 2:1 refinement consistency check.
Save the beautiful AMR
    // for each siblings block, perform restriction from its 8 children cells
    for (u32 pid = 0; pid < 8; pid++) {
        auto rho_pid     = rho_block[pid];
        auto rho_vel_pid = rho_vel_block[pid];
        auto rhoe_pid    = rhoE_block[pid];

        for (u32 cell_id = 0; cell_id < AMRBlock::block_size; cell_id++) {
            rho_block[cell_id] += acc.rho[old_blocks[pid] * AMRBlock::block_size + cell_id];
            rho_vel_block[cell_id]
                += acc.rho_vel[old_blocks[pid] * AMRBlock::block_size + cell_id];
            rhoE_block[cell_id]
                += acc.rhoE[old_blocks[pid] * AMRBlock::block_size + cell_id];
            rho_pid += acc.rho[old_blocks[pid] * AMRBlock::block_size + cell_id];
            rho_vel_pid += acc.rho_vel[old_blocks[pid] * AMRBlock::block_size + cell_id];
            rhoe_pid += acc.rhoE[old_blocks[pid] * AMRBlock::block_size + cell_id];
        }
    }

    for (u32 cell_id = 0; cell_id < AMRBlock::block_size; cell_id++) {
        rho_block[cell_id] /= 8;
        rho_vel_block[cell_id] /= 8;
        rhoE_block[cell_id] /= 8;
        rho_block[pid]     = rho_pid * (1. / 8.);
        rho_vel_block[pid] = rho_vel_pid * (1. / 8.);
        rhoE_block[pid]    = rhoe_pid * (1. / 8.);
I'm not sure why this was changed; was something wrong?
    internal_refine_grid(shambase::DistributedData<sham::DeviceBuffer<u32>> &&refine_list) {

        using namespace shamrock::patch;
    internal_refine_grid(shambase::DistributedData<sham::DeviceBuffer<u32>> &&dd_refine_flags) {
Those renamings could also be a sub-PR.
I converted the PR to draft since it is not intended to be merged in one block and will be used mostly for staging.
    struct Edges {
        const shamrock::solvergraph::PatchDataFieldDDShared<T> &ghost_fields;
        shamrock::solvergraph::PatchDataFieldDDShared<T> &ghost_fields;
    shamrock::solvergraph::PatchDataFieldDDShared<T> &ghost_fields;
    const shamrock::solvergraph::PatchDataFieldDDShared<T> &ghost_fields;
    __internal_set_ro_edges({});
    __internal_set_rw_edges({ghost_fields, fields});
    __internal_set_ro_edges({});
    __internal_set_rw_edges({ghost_fields, fields});
    __internal_set_ro_edges({ghost_fields});
    __internal_set_rw_edges({fields});
    fields.get_refs().for_each([&](u32 id_patch, PatchDataField<T> &field) {
        // TODO: currently we guess the GZ size using the input, we should use in place ghost
        // zones really ...
        PatchDataField<T> temp_pdat(field.get_name(), field.get_nvar());
This variable is unused:

    PatchDataField<T> temp_pdat(field.get_name(), field.get_nvar());
Workflow report

Workflow report corresponding to commit d1e9d9f. Light CI is enabled. This will only run the basic tests and not the full tests.

Pre-commit check report

Some failures were detected in base source checks:

❌ trailing-whitespace
❌ end-of-file-fixer
❌ ruff-check
❌ ruff-format

Suggested changes

Detailed changes:

diff --git a/examples/TO_MIGRATE/ramses/interacting_blast_wave.py b/examples/TO_MIGRATE/ramses/interacting_blast_wave.py
index 4ca5a6a7..ae5796b1 100644
--- a/examples/TO_MIGRATE/ramses/interacting_blast_wave.py
+++ b/examples/TO_MIGRATE/ramses/interacting_blast_wave.py
@@ -61,7 +61,6 @@ def run_1d_interacting_blast_wave_test(
p0_m = 0.01
p0_r = 100
-
def rho_map(rmin, rmax):
return 1
@@ -81,71 +80,76 @@ def run_1d_interacting_blast_wave_test(
model.set_field_value_lambda_f64("rhoetot", rhoetot_map)
model.set_field_value_lambda_f64_3("rhovel", rhovel_map)
-
-
def analysis():
results = []
for dt in timestep_lists:
model.evolve_until(dt)
- rho_vals = model.render_slice("rho", "f64", positions_plot)
+ rho_vals = model.render_slice("rho", "f64", positions_plot)
rhov_vals = model.render_slice("rhovel", "f64_3", positions_plot)
rhoetot_vals = model.render_slice("rhoetot", "f64", positions_plot)
vx = np.array(rhov_vals)[:, 0] / np.array(rho_vals)
P = (np.array(rhoetot_vals) - 0.5 * np.array(rho_vals) * vx**2) * (gamma - 1)
- e_int = P/((gamma - 1.) * np.array(rho_vals))
+ e_int = P / ((gamma - 1.0) * np.array(rho_vals))
results_dic = {
- "rho": np.array(rho_vals),
- "vx": np.array(vx),
- "P": np.array(P),
- "e_int":np.array(e_int)
- }
+ "rho": np.array(rho_vals),
+ "vx": np.array(vx),
+ "P": np.array(P),
+ "e_int": np.array(e_int),
+ }
results.append(results_dic)
-
- output = np.column_stack((np.array(rho_vals), np.array(vx), np.array(P), np.array(e_int)))
- filename= f"data_interacting_blast_wave_reso_{2*base*multx}_at_{dt}.txt"
- np.savetxt(os.path.join(output_folder,filename),
- output,
- fmt=["%.10f", "%.10f", "%.10f", "%.10f"],
- header="rho vx P e_int",
- )
+
+ output = np.column_stack(
+ (np.array(rho_vals), np.array(vx), np.array(P), np.array(e_int))
+ )
+ filename = f"data_interacting_blast_wave_reso_{2 * base * multx}_at_{dt}.txt"
+ np.savetxt(
+ os.path.join(output_folder, filename),
+ output,
+ fmt=["%.10f", "%.10f", "%.10f", "%.10f"],
+ header="rho vx P e_int",
+ )
return results
def plot_results(data):
arr_x = [x[0] for x in positions_plot]
for i, frame in enumerate(data):
fig, axs = plt.subplots(2, 2, figsize=(8, 8), dpi=100, constrained_layout=True)
-
- ## density
- axs[0,0].set_xlabel("$x$")
- axs[0,0].set_ylabel("$\\rho$")
- axs[0,0].grid(True, alpha=0.3)
- axs[0,0].scatter(arr_x, frame["rho"], label=f"{2*base*multx}", s=10, marker = "*")
- axs[0,0].legend(loc=0)
+
+ ## density
+ axs[0, 0].set_xlabel("$x$")
+ axs[0, 0].set_ylabel("$\\rho$")
+ axs[0, 0].grid(True, alpha=0.3)
+ axs[0, 0].scatter(arr_x, frame["rho"], label=f"{2 * base * multx}", s=10, marker="*")
+ axs[0, 0].legend(loc=0)
## velocity
- axs[0,1].set_xlabel("$x$")
- axs[0,1].set_ylabel("$v_\\mathrm{x}$")
- axs[0,1].grid(True, alpha=0.3)
- axs[0,1].scatter(arr_x, frame["vx"], label=f"{2*base*multx}", s=10, marker ="*" )
- axs[0,1].legend(loc=0)
-
-
+ axs[0, 1].set_xlabel("$x$")
+ axs[0, 1].set_ylabel("$v_\\mathrm{x}$")
+ axs[0, 1].grid(True, alpha=0.3)
+ axs[0, 1].scatter(arr_x, frame["vx"], label=f"{2 * base * multx}", s=10, marker="*")
+ axs[0, 1].legend(loc=0)
+
## pressure
- axs[1,1].set_xlabel("$x$")
- axs[1,1].set_ylabel("$\\mathrm{P}$")
- axs[1,1].grid(True, alpha=0.3)
- axs[1,1].scatter(arr_x, frame["P"], label=f"{2*base*multx}", s=10, marker ="*" )
- axs[1,1].legend(loc=0)
+ axs[1, 1].set_xlabel("$x$")
+ axs[1, 1].set_ylabel("$\\mathrm{P}$")
+ axs[1, 1].grid(True, alpha=0.3)
+ axs[1, 1].scatter(arr_x, frame["P"], label=f"{2 * base * multx}", s=10, marker="*")
+ axs[1, 1].legend(loc=0)
## internal energy
- axs[1,0].set_xlabel("$x$")
- axs[1,0].set_ylabel("$\\mathrm{e}_\\mathrm{int}$")
- axs[1,0].grid(True, alpha=0.3)
- axs[1,0].scatter(arr_x, frame["e_int"], label=f"{2*base*multx}", s=10, marker ="*" )
- axs[1,0].legend(loc=0)
-
+ axs[1, 0].set_xlabel("$x$")
+ axs[1, 0].set_ylabel("$\\mathrm{e}_\\mathrm{int}$")
+ axs[1, 0].grid(True, alpha=0.3)
+ axs[1, 0].scatter(arr_x, frame["e_int"], label=f"{2 * base * multx}", s=10, marker="*")
+ axs[1, 0].legend(loc=0)
+
plt.tight_layout()
- plt.savefig(os.path.join(output_folder, f"Interacting_blast_wave_test_at_{timestep_lists[i]}_nx_reso_{base*2*multx}.png"))
+ plt.savefig(
+ os.path.join(
+ output_folder,
+ f"Interacting_blast_wave_test_at_{timestep_lists[i]}_nx_reso_{base * 2 * multx}.png",
+ )
+ )
plt.close(fig)
# def gif_results(data, tmax, case_anim="inter-blast"):
@@ -185,7 +189,6 @@ def run_1d_interacting_blast_wave_test(
# axs[0,1].set_ylim(vx_min - vx_margin, vx_max + vx_margin)
# axs[0,1].grid(True, alpha=0.3)
-
# (line_rho,) = axs[0,0].plot(arr_x, data[0], label="$\\rho$", linewidth=2, color="C0")
# axs[0,0].legend()
@@ -213,4 +216,4 @@ def run_1d_interacting_blast_wave_test(
plot = run_and_plot()
-run_1d_interacting_blast_wave_test(8, 2<<2, 1)
+run_1d_interacting_blast_wave_test(8, 2 << 2, 1)
diff --git a/examples/TO_MIGRATE/ramses/shu_osher.py b/examples/TO_MIGRATE/ramses/shu_osher.py
index 2fbed524..d81d23f5 100644
--- a/examples/TO_MIGRATE/ramses/shu_osher.py
+++ b/examples/TO_MIGRATE/ramses/shu_osher.py
@@ -1,5 +1,3 @@
-
-
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import numpy as np
@@ -12,9 +10,6 @@ ctx.pdata_layout_new()
model = shamrock.get_Model_Ramses(context=ctx, vector_type="f64_3", grid_repr="i64_3")
import os
-import matplotlib.pyplot as plt
-import numpy as np
-
import shamrock
# If we use the shamrock executable to run this script instead of the python interpreter,
@@ -24,7 +19,6 @@ if not shamrock.sys.is_initialized():
shamrock.sys.init("0:0")
-
def run_1d_Shu_Osher_test(
nblocks,
nxbox,
@@ -71,37 +65,33 @@ def run_1d_Shu_Osher_test(
model.init_scheduler(int(1e7), 1)
model.make_base_grid((0, 0, 0), (sz, sz, sz), (base * multx, base * multy, base * multz))
-
x_s = 0.2
A0 = 0.2
- fp = 5.
-
-
+ fp = 5.0
def rho_map(rmin, rmax):
- x,y,z = rmin
- rho = rho=1 + A0 * np.sin(fp * np.pi* x)
+ x, y, z = rmin
+ rho = rho = 1 + A0 * np.sin(fp * np.pi * x)
if x < x_s:
- rho=3.857143
+ rho = 3.857143
return rho
-
def rhovel_map(rmin, rmax):
- rho = rho_map(rmin,rmax)
- x,y,z = rmin
- rhovec=(0,0,0)
+ rho = rho_map(rmin, rmax)
+ x, y, z = rmin
+ rhovec = (0, 0, 0)
if x < x_s:
rhovec = (rho * 2.629369, 0, 0)
return rhovec
def rhoetot_map(rmin, rmax):
- rho = rho_map(rmin,rmax)
+ rho = rho_map(rmin, rmax)
rhov = rhovel_map(rmin, rmax)
- rhoe_kin = 0.5 * (rhov[0]**2)/rho
- rhoe_int = 1.0/(gamma - 1.0)
+ rhoe_kin = 0.5 * (rhov[0] ** 2) / rho
+ rhoe_int = 1.0 / (gamma - 1.0)
x, y, z = rmin
if x < x_s:
- rhoe_int = 10.33333/(gamma - 1.0)
+ rhoe_int = 10.33333 / (gamma - 1.0)
return rhoe_int + rhoe_kin
model.set_field_value_lambda_f64("rho", rho_map)
@@ -109,73 +99,80 @@ def run_1d_Shu_Osher_test(
model.set_field_value_lambda_f64_3("rhovel", rhovel_map)
def analysis():
-
+
results = []
# for i in range(timestamps+1):
- for dt in timestep_list:
+ for dt in timestep_list:
# model.evolve_until(dt_evolve * i)
model.evolve_until(dt)
- rho_vals = model.render_slice("rho", "f64", positions_plot)
+ rho_vals = model.render_slice("rho", "f64", positions_plot)
rhov_vals = model.render_slice("rhovel", "f64_3", positions_plot)
rhoetot_vals = model.render_slice("rhoetot", "f64", positions_plot)
vx = np.array(rhov_vals)[:, 0] / np.array(rho_vals)
P = (np.array(rhoetot_vals) - 0.5 * np.array(rho_vals) * vx**2) * (gamma - 1)
- e_int = P/((gamma - 1.) * np.array(rho_vals))
+ e_int = P / ((gamma - 1.0) * np.array(rho_vals))
results_dic = {
- "rho": np.array(rho_vals),
- "vx": np.array(vx),
- "P": np.array(P),
- "e_int":np.array(e_int)
- }
+ "rho": np.array(rho_vals),
+ "vx": np.array(vx),
+ "P": np.array(P),
+ "e_int": np.array(e_int),
+ }
results.append(results_dic)
-
- output = np.column_stack((np.array(rho_vals), np.array(vx), np.array(P), np.array(e_int)))
- filename= f"data_Shu_Osher_reso_{2*base*multx}_at_{dt}.txt"
- np.savetxt(os.path.join(output_folder,filename),
- output,
- fmt=["%.10f", "%.10f", "%.10f", "%.10f"],
- header="rho vx P e_int",
- )
- return results
+ output = np.column_stack(
+ (np.array(rho_vals), np.array(vx), np.array(P), np.array(e_int))
+ )
+ filename = f"data_Shu_Osher_reso_{2 * base * multx}_at_{dt}.txt"
+ np.savetxt(
+ os.path.join(output_folder, filename),
+ output,
+ fmt=["%.10f", "%.10f", "%.10f", "%.10f"],
+ header="rho vx P e_int",
+ )
+ return results
def plot_results(data):
arr_x = [x[0] for x in positions_plot]
for i, frame in enumerate(data):
fig, axs = plt.subplots(2, 2, figsize=(8, 8), dpi=100, constrained_layout=True)
-
- ## density
- axs[0,0].set_xlabel("$x$")
- axs[0,0].set_ylabel("$\\rho$")
- axs[0,0].grid(True, alpha=0.3)
- axs[0,0].scatter(arr_x, frame["rho"], label=f"{2*base*multx}", s=10, marker = "*")
- axs[0,0].legend(loc=0)
+
+ ## density
+ axs[0, 0].set_xlabel("$x$")
+ axs[0, 0].set_ylabel("$\\rho$")
+ axs[0, 0].grid(True, alpha=0.3)
+ axs[0, 0].scatter(arr_x, frame["rho"], label=f"{2 * base * multx}", s=10, marker="*")
+ axs[0, 0].legend(loc=0)
## velocity
- axs[0,1].set_xlabel("$x$")
- axs[0,1].set_ylabel("$v_\\mathrm{x}$")
- axs[0,1].grid(True, alpha=0.3)
- axs[0,1].scatter(arr_x, frame["vx"], label=f"{2*base*multx}", s=10, marker ="*" )
- axs[0,1].legend(loc=0)
-
-
+ axs[0, 1].set_xlabel("$x$")
+ axs[0, 1].set_ylabel("$v_\\mathrm{x}$")
+ axs[0, 1].grid(True, alpha=0.3)
+ axs[0, 1].scatter(arr_x, frame["vx"], label=f"{2 * base * multx}", s=10, marker="*")
+ axs[0, 1].legend(loc=0)
+
## pressure
- axs[1,1].set_xlabel("$x$")
- axs[1,1].set_ylabel("$\\mathrm{P}$")
- axs[1,1].grid(True, alpha=0.3)
- axs[1,1].scatter(arr_x, frame["P"], label=f"{2*base*multx}", s=10, marker ="*" )
- axs[1,1].legend(loc=0)
+ axs[1, 1].set_xlabel("$x$")
+ axs[1, 1].set_ylabel("$\\mathrm{P}$")
+ axs[1, 1].grid(True, alpha=0.3)
+ axs[1, 1].scatter(arr_x, frame["P"], label=f"{2 * base * multx}", s=10, marker="*")
+ axs[1, 1].legend(loc=0)
## internal energy
- axs[1,0].set_xlabel("$x$")
- axs[1,0].set_ylabel("$\\mathrm{e}_\\mathrm{int}$")
- axs[1,0].grid(True, alpha=0.3)
- axs[1,0].scatter(arr_x, frame["e_int"], label=f"{2*base*multx}", s=10, marker ="*" )
- axs[1,0].legend(loc=0)
-
+ axs[1, 0].set_xlabel("$x$")
+ axs[1, 0].set_ylabel("$\\mathrm{e}_\\mathrm{int}$")
+ axs[1, 0].grid(True, alpha=0.3)
+ axs[1, 0].scatter(arr_x, frame["e_int"], label=f"{2 * base * multx}", s=10, marker="*")
+ axs[1, 0].legend(loc=0)
+
plt.tight_layout()
- plt.savefig(os.path.join(output_folder, f"Shu_Osher_test_at_{timestep_list[i]}_nx_reso_{base*2*multx}.png"))
+ plt.savefig(
+ os.path.join(
+ output_folder,
+ f"Shu_Osher_test_at_{timestep_list[i]}_nx_reso_{base * 2 * multx}.png",
+ )
+ )
plt.close(fig)
+
def run_and_plot():
data = analysis()
@@ -185,4 +182,4 @@ def run_1d_Shu_Osher_test(
plot = run_and_plot()
-run_1d_Shu_Osher_test(8, 2<<5, 1)
\ No newline at end of file
+run_1d_Shu_Osher_test(8, 2 << 5, 1)
diff --git a/examples/TO_MIGRATE/ramses/sod_amr_thesis.py b/examples/TO_MIGRATE/ramses/sod_amr_thesis.py
index 42152416..2b3dacb7 100644
--- a/examples/TO_MIGRATE/ramses/sod_amr_thesis.py
+++ b/examples/TO_MIGRATE/ramses/sod_amr_thesis.py
@@ -11,6 +11,7 @@ ctx.pdata_layout_new()
model = shamrock.get_Model_Ramses(context=ctx, vector_type="f64_3", grid_repr="i64_3")
+
def run_case(nb_blocks, amr_lev, case="amr"):
multx = 1
multy = 1
@@ -31,11 +32,10 @@ def run_case(nb_blocks, amr_lev, case="amr"):
cfg.set_boundary_condition("z", "reflective")
cfg.set_riemann_solver_hllc()
-
cfg.set_slope_lim_minmod()
cfg.set_face_time_interpolation(True)
- if(case == "amr"):
+ if case == "amr":
err_min = 0.25
err_max = 0.15
cfg.set_amr_mode_pseudo_gradient_based(error_min=err_min, error_max=err_max)
@@ -47,7 +47,6 @@ def run_case(nb_blocks, amr_lev, case="amr"):
(0, 0, 0), (cell_size, cell_size, cell_size), (base * multx, base * multy, base * multz)
)
-
def rho_map(rmin, rmax):
x, y, z = rmin
if x < 0.5:
@@ -55,11 +54,9 @@ def run_case(nb_blocks, amr_lev, case="amr"):
else:
return 0.125
-
etot_L = 1.0 / (gamma - 1)
etot_R = 0.1 / (gamma - 1)
-
def rhoetot_map(rmin, rmax):
x, y, z = rmin
if x < 0.5:
@@ -67,16 +64,13 @@ def run_case(nb_blocks, amr_lev, case="amr"):
else:
return etot_R
-
def rhovel_map(rmin, rmax):
return (0, 0, 0)
-
model.set_field_value_lambda_f64("rho", rho_map)
model.set_field_value_lambda_f64("rhoetot", rhoetot_map)
model.set_field_value_lambda_f64_3("rhovel", rhovel_map)
-
def convert_to_cell_coords(dic):
cmin = dic["cell_min"]
@@ -117,7 +111,6 @@ def run_case(nb_blocks, amr_lev, case="amr"):
return dic
-
t_target = 0.245
dt = 0
@@ -198,20 +191,22 @@ def run_case(nb_blocks, amr_lev, case="amr"):
## density
ax00_1 = axs[0, 0]
- if(case == "amr"):
+ if case == "amr":
ax00_2 = ax00_1.twinx()
ax00_2.set_ylim(np.floor(l.min()), np.ceil(l.max())) # optional but important
ax00_2.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax00_2.yaxis.set_major_formatter(ticker.FormatStrFormatter("%d"))
- ax00_1.scatter(X, rho, rasterized=True, s=12 * np.ones(X.shape), color="red", label="numeric")
+ ax00_1.scatter(
+ X, rho, rasterized=True, s=12 * np.ones(X.shape), color="red", label="numeric"
+ )
ax00_1.plot(arr_x, arr_rho, ls="--", lw=2.0, color="black", label="exact")
ax00_1.set_xlabel("$X$")
ax00_1.set_ylabel("density")
ax00_1.legend(loc=0)
ax00_1.grid()
- if(case == "amr"):
+ if case == "amr":
idx = np.argsort(X)
ax00_2.plot(X[idx], l[idx], color="purple", marker="*", linewidth=1.0, ls="-.")
ax00_2.set_ylabel("AMR level")
@@ -219,7 +214,7 @@ def run_case(nb_blocks, amr_lev, case="amr"):
## velocity
ax01_1 = axs[0, 1]
- if(case=="amr"):
+ if case == "amr":
ax01_2 = ax01_1.twinx()
ax01_2.set_ylim(np.floor(l.min()), np.ceil(l.max())) # optional but important
ax01_2.yaxis.set_major_locator(ticker.MultipleLocator(1))
@@ -232,7 +227,7 @@ def run_case(nb_blocks, amr_lev, case="amr"):
ax01_1.legend(loc=0)
ax01_1.grid()
- if(case=="amr"):
+ if case == "amr":
idx = np.argsort(X)
ax01_2.plot(X[idx], l[idx], color="purple", marker="*", linewidth=1.0, ls="-.")
ax01_2.set_ylabel("AMR level")
@@ -240,7 +235,7 @@ def run_case(nb_blocks, amr_lev, case="amr"):
## pressure
ax11_1 = axs[1, 1]
- if(case =="amr"):
+ if case == "amr":
ax11_2 = ax11_1.twinx()
ax11_2.set_ylim(np.floor(l.min()), np.ceil(l.max())) # optional but important
ax11_2.yaxis.set_major_locator(ticker.MultipleLocator(1))
@@ -254,8 +249,8 @@ def run_case(nb_blocks, amr_lev, case="amr"):
ax11_1.set_ylabel("pressure")
ax11_1.legend(loc=0)
ax11_1.grid()
-
- if(case == "amr"):
+
+ if case == "amr":
idx = np.argsort(X)
ax11_2.plot(X[idx], l[idx], color="purple", marker="*", linewidth=1.0, ls="-.")
ax11_2.set_ylabel("AMR level")
@@ -263,14 +258,19 @@ def run_case(nb_blocks, amr_lev, case="amr"):
## internal energy
ax10_1 = axs[1, 0]
- if(case=="amr"):
+ if case == "amr":
ax10_2 = ax10_1.twinx()
ax10_2.set_ylim(np.floor(l.min()), np.ceil(l.max())) # optional but important
ax10_2.yaxis.set_major_locator(ticker.MultipleLocator(1))
ax10_2.yaxis.set_major_formatter(ticker.FormatStrFormatter("%d"))
ax10_1.scatter(
- X, internal_energy, rasterized=True, s=12 * np.ones(X.shape), color="red", label="numeric"
+ X,
+ internal_energy,
+ rasterized=True,
+ s=12 * np.ones(X.shape),
+ color="red",
+ label="numeric",
)
ax10_1.plot(
arr_x, arr_P / (arr_rho * (gamma - 1.0)), ls="--", lw=2.0, color="black", label="exact"
@@ -280,7 +280,7 @@ def run_case(nb_blocks, amr_lev, case="amr"):
ax10_1.legend(loc=0)
ax10_1.grid()
- if(case=="amr"):
+ if case == "amr":
idx = np.argsort(X)
ax10_2.plot(X[idx], l[idx], color="purple", marker="*", linewidth=1.0, ls="-.")
ax10_2.set_ylabel("AMR level")
@@ -289,4 +289,4 @@ def run_case(nb_blocks, amr_lev, case="amr"):
plt.savefig(f"sod-{case}.pdf")
-run_case(16, 3,"uniform")
+run_case(16, 3, "uniform")
diff --git a/src/shammodels/ramses/CMakeLists.txt b/src/shammodels/ramses/CMakeLists.txt
index d3e05d2b..56f1a5cb 100644
--- a/src/shammodels/ramses/CMakeLists.txt
+++ b/src/shammodels/ramses/CMakeLists.txt
@@ -44,7 +44,6 @@ set(Sources
src/modules/SumFluxHydro.cpp
src/modules/SumFluxDust.cpp
src/SolverConfig.cpp
-
src/modules/NodeAXPY.cpp
src/modules/NodeAYPX.cpp
src/modules/NodeHadamardProd.cpp
diff --git a/src/shammodels/ramses/src/modules/NodeJacobiPreconditioner.cpp b/src/shammodels/ramses/src/modules/NodeJacobiPreconditioner.cpp
index 09f9993e..e6afe686 100644
--- a/src/shammodels/ramses/src/modules/NodeJacobiPreconditioner.cpp
+++ b/src/shammodels/ramses/src/modules/NodeJacobiPreconditioner.cpp
@@ -46,7 +46,7 @@ namespace {
u32 i, const T *__restrict csize, const T *__restrict x, T *__restrict y) {
u32 block_id = i / block_size;
T dV = csize[block_id];
- y[i] = (dV * dV) * (x[i]) / 6;
+ y[i] = (dV * dV) * (x[i]) / 6;
});
}
};
|
This PR is the main one, from which the modular PRs will be extracted.
sedov_amr_clean_v1.mp4